Viktor Prokopenya Invested $5 Million into the Outstanding Belarusian Augmented Reality Start-up - Banuba

In its very first year, Banuba, a start-up with an R&D center in Minsk, has filed six technology patent applications in areas connected with computer vision and augmented reality. Instead of creating numerous apps, the company, which attracted investment from Viktor Prokopenya, focuses on researching and developing integrated technologies that will later be used in its own products.

Today it was announced that the start-up Banuba has attracted $5 million in investment from Larnabel Ventures, the venture fund of the Gutseriev family, and Viktor Prokopenya's VP Capital.

“Augmented reality is a fast-growing mobile technology sector, and Banuba will stay at the forefront of this innovative area. We're glad to support this ambitious company in its drive to become an industry leader,” said Said Gutseriev, Managing Partner of Larnabel.

As it happens, the connection between the Belarusian IT entrepreneur and the ambitious new project is even closer than a simple investment. But more on that a little later.

 

Banuba's history: built on exp(capital) and the growing interest in augmented reality

“In the long run, those who make progress win. That's why we strive to. We don't play by rules already written; we want to establish our own. That's why we take on challenging tasks, the ones that seem unsolvable,” explains Vadim Nekhai, Banuba's CEO and co-founder.

The idea of creating Banuba appeared in spring 2016, when large companies such as Apple, Facebook and Snapchat grew more interested in augmented reality technologies.

“Today the virtual reality market exceeds the augmented reality market, but according to numerous forecasts, in 5-10 years AR will outgrow VR by up to three times,” says Vadim. The reasons are simple: augmented reality is right there in our pockets. Smartphones already have sufficient infrastructure to run AR technologies and many basic AR scenarios today. Virtual reality requires far more processing power, and its mobility is very limited.

The start-up's key members are former exp(capital) employees who worked with Viktor Prokopenya. exp(capital), a fintech start-up, was already up and running, and from the moment of Banuba's inception many developers got the opportunity to bring their machine-learning experience to the new challenges.

 

Today Banuba has a team of 30 people (most of them work in the Minsk office), and the company plans to double in size soon.

“We do research, private science, and in this respect we continue exp(capital)'s strategy,” shares Vadim Nekhai. “The main purpose of our R&D department is to develop a portfolio of core technologies that will make it possible to create a variety of AR products. Today we mostly build case studies that demonstrate the company's potential.”

Vadim Nekhai highlights that all of Banuba's technologies are built around the video camera.

“Today the camera is as important to a smartphone user as eyesight is to a human being. The amount of data created by camera users grows every day. Smartphones can already ‘read’ photos and have begun to perceive image depth (for example, the depth-sensing camera in the Lenovo Phab 2 Pro); in time they will learn to digitise and perceive 3D objects, a technology that will help create images as close as possible to human vision.”

 

«We're not interested in individual technologies, only in their combination»

The main focus of Banuba's research is computer vision for real-time video-stream processing.

Processing is subdivided into three basic steps: detection, alignment and tracking. In the first step, the app detects an object in a video frame and identifies its approximate coordinates. In the second, it refines the detected object down to pixel-perfect precision. The third step is relevant only for video: the app tracks how the object moves within the frame, how it changes and transforms.

“Many companies can solve the first step. Only a few can cope with the second. Very few can manage the third. We have solutions for all three steps, but we keep working: there is always something to improve,” says Nekhai. “The applications of this scheme vary greatly. You can add virtual elements to the view of objective reality. You can also do the reverse: for example, detect the facial expressions and gestures of a real user and transfer them to an avatar in a game.”
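The detection, alignment and tracking scheme can be illustrated with a toy sketch (in Python for brevity, not Banuba's actual C/C++ code; the frame here is just a 2D grid of intensities, and each stage is a deliberately naive stand-in for the real trained models):

```python
def detect(frame):
    """Step 1, detection: approximate bounding box (x, y, w, h).
    Stand-in logic: any nonzero pixel counts as 'object'."""
    xs = [x for row in frame for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(frame) if any(row)]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

def align(frame, box):
    """Step 2, alignment: refine to sub-pixel precision.
    Stand-in logic: intensity-weighted centroid inside the box."""
    x0, y0, w, h = box
    total = cx = cy = 0.0
    for y in range(y0, y0 + h):
        for x in range(x0, x0 + w):
            v = frame[y][x]
            total += v
            cx += v * x
            cy += v * y
    return (cx / total, cy / total)

def track(prev, current, alpha=0.5):
    """Step 3, tracking: smooth the position across consecutive frames."""
    if prev is None:
        return current
    return (alpha * current[0] + (1 - alpha) * prev[0],
            alpha * current[1] + (1 - alpha) * prev[1])
```

In a real pipeline, detection would be a trained detector run every few frames, alignment a landmark-fitting model, and tracking an optical-flow or filtering step that keeps the result stable from frame to frame.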

The start-up stresses that it is interested not in separate technologies but in an integrated set of them, a technological portfolio.

“Face detection alone doesn't make augmented reality really useful,” notes Vadim Nekhai. “To make AR applicable in everyday routines, detection has to be complemented by a whole list of other technologies, and we are currently working on building that chain.”

 

A Smile Instead of a Like, a Gaze Instead of a Touchscreen

“It has been said that the face is the index of the heart: it can reflect a person's inner state even better than words or actions,” continues Vadim. “People are irrational creatures; many of our decisions are based on emotions, feelings and intuition. All of this is reflected, more or less clearly, on our faces, and if a computer learns to ‘read a face’, it gains the chance to build a new level of interaction with the user, organic and natural.”

Banuba has already learnt to detect some facial expressions, a kind of initial ‘face reading’. Now the challenge is emotion detection: “not only to detect expressions, but to understand a person's emotional state”.

Computer science has a separate field, affective computing, which studies systems capable of detecting, interpreting, processing and simulating human experience, feelings and emotions. It is closely related to deep learning and is used in face, emotion and gesture detection algorithms. Banuba's specialists dive deep into this area in their research.

“For many years Professor Paul Ekman conducted a comprehensive study of the phenomenon of lying. He eventually wrote the book ‘Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage’, in which he describes many factors that can be used to find out whether a person is telling the truth,” says Vadim Nekhai. “One of these factors is microexpressions: fleeting facial movements that can't be consciously controlled. Many emotions are reflected in such microexpressions, which means they too can be detected and analysed.”

The Banuba team says that if they can do this with sufficient accuracy, the technology will have many uses. For example, a product's success could be measured not by the number of ‘likes’ and ‘shares’ in social media, but by the emotions felt by a person interacting with it. Joy or anger, excitement or hatred, a neutral expression: all of this tells us far more about how well a product fits the person.

Similar things are already used in advertising. Large companies, including Coca-Cola and Disney, study facial expressions while people watch commercials, analysing whether viewers like the ad and what emotions it arouses at different moments. Such techniques are also used in cars, to tell whether a driver is distracted from the road or stressed. A smartphone could proactively recommend a break when the user is tired, because it can ‘read’ the signs of sleepy eyes or sluggish movements.

We have already seen growing interest in a new type of advertising: video filters in mobile apps. Augmented reality in messengers is, in effect, the new generation of emoticons. The mobile-first principle is giving way to camera-first.

Another application of emotion detection is e-learning: there are programs capable of detecting whether a person has difficulty understanding the material. The same technologies are used in electronic therapy, that is, remote patient-doctor communication, where a user can be advised based on the expression detected on their face. Another interesting application is helping people with autistic disorders to recognise and understand the emotional states of others.

“All of these are opportunities to create an entirely new UX, as well as outstanding innovative products,” says Vadim Nekhai. “One of our patents in this direction, called Line of Sight, detects a face in the video stream and determines the user's gaze. We can then project the gaze direction onto the smartphone screen. In effect, the gaze replaces the touchscreen: a person can interact with the screen, choose menu options and move objects without using their hands, by gaze alone.”
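Geometrically, the projection Nekhai describes reduces to intersecting the gaze ray with the screen plane. A minimal sketch, under a hypothetical coordinate setup where the screen is the plane z = 0 and the eye sits in front of it at z > 0 (the patented method is of course far more involved):

```python
def gaze_to_screen(eye, direction):
    """Project a gaze ray onto the screen plane z = 0.

    eye:       (x, y, z) eye position in front of the screen (z > 0)
    direction: (dx, dy, dz) gaze vector; dz < 0 means looking at the screen
    Returns (x, y) screen coordinates, or None if the gaze misses the screen.
    """
    ex, ey, ez = eye
    dx, dy, dz = direction
    if dz >= 0:
        return None  # looking away from, or parallel to, the screen
    t = -ez / dz     # ray parameter where the ray crosses z = 0
    return (ex + t * dx, ey + t * dy)
```

An eye 30 cm from the screen looking straight ahead lands at the point directly opposite it; tilting the gaze vector shifts the landing point accordingly. The hard part in practice is not this projection but robustly estimating `eye` and `direction` from the camera image.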

Imagine a person cooking dinner: their hands are occupied or dirty, but they can still turn pages or change the volume with a glance. A musician can turn sheet-music pages without being distracted from the instrument. No separate app will be built on this technology alone, but at particular moments these things work better than doing everything by hand.

Attempts at such a UX have already been made. Samsung, for example, introduced in one of its devices the ability to turn e-book pages by gaze. Of course, such an interface has its restrictions, because developers can't yet eliminate every false trigger. Still, it is a very valuable signal on which micro use-cases can be built.

Clearly, the future of the technology lies in emotion detection going mainstream and in a shift away from clicks. This will drastically change the UX.

The Line of Sight patent and our other research in this area are also connected with another huge trend, No UI, which aims to solve a global problem of existing products and technologies: today a person has to adapt to the product and go through a user-adoption process.

Chat-bots are becoming ever more popular. They can replace part of the user interface, freeing us from excessive windows and dialogs and so cutting the time and effort needed to learn an interface. For now, though, these bots are completely heartless. If they learn to understand a user's emotions, they can become more ‘empathetic’. Examples already exist: IBM Watson can detect sarcasm by analysing the text of messages.

If robots and computers learn to behave ‘humanly’, reading and taking emotions into account, it will be a breakthrough. No UI will then step up to a new level: technology will let the product adjust to the user.

 

«Very applied» mathematics for deep optimisation

The world has already seen computer-vision and machine-learning systems capable of processing a video stream in real time. However, they run only on powerful servers, which makes them inaccessible to ordinary users. This brings us to the second focus of Banuba's research: algorithm optimisation, aimed at narrowing the gap between the low processing power of affordable mobile devices and the high resource demands of AR technologies.

“We optimise machine-learning algorithms and improve our technologies to work on mobile devices,” says Vadim Nekhai. “Optimisation calls for very specific, applied mathematics, not just what we learned at university. Many developers join us precisely because here they can put their mathematical background into practice.

To optimise apps for mobile devices we need a deep understanding of how those devices work: the peculiarities of their operating systems, processors and GPUs. The Banuba team consists of highly qualified technical specialists, and this is one of our competitive advantages. All else being equal, we can achieve more than other companies.

Smartphones, for example, implement CPU frequency scaling. The standard way of using a mobile device is in short sessions: open an app, work for a few minutes, close it. When an app runs for a long time, the operating system can forcibly lower the CPU frequency to save battery charge and protect the device from overheating.

That doesn't affect most applications. But real-time video processing demands a lot of computing power, and we can't let the OS lower the clock frequency: it hurts the UX too much. Optimising the algorithms on the assumption of lost processor horsepower isn't an option either. Building a quality app takes a lot of resources, and it makes no sense to give them up voluntarily. It took a long time to find a good way around this artificial limitation, but we finally did.”
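The article does not disclose how Banuba actually bypasses the throttling, but one common mitigation pattern is to watch per-frame processing time and degrade the workload gracefully instead of letting frames stall. A hypothetical sketch of that pattern:

```python
class AdaptiveQuality:
    """Shrink per-frame work (e.g. processing resolution) when frame
    times blow the budget, and grow it back when there is headroom."""

    def __init__(self, budget_ms=33.0, min_scale=0.25):
        self.budget_ms = budget_ms   # roughly a 30 fps frame budget
        self.min_scale = min_scale   # never degrade below this fraction
        self.scale = 1.0             # current fraction of full workload

    def update(self, last_frame_ms):
        if last_frame_ms > self.budget_ms:
            # Over budget: the CPU is throttled or overloaded; back off.
            self.scale = max(self.min_scale, self.scale * 0.8)
        elif last_frame_ms < 0.5 * self.budget_ms:
            # Plenty of headroom: cautiously restore quality.
            self.scale = min(1.0, self.scale * 1.1)
        return self.scale
```

Repeated 50 ms frames drive the scale down toward `min_scale`, while consistently fast frames let it recover toward full quality, so the app degrades smoothly rather than stuttering when the OS steps the clock down.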

The start-up notes that device capabilities are also developing quickly. Today most machine-learning algorithms run on the GPU, and graphics chips are built into every smartphone. In practice, though, this workload calls for dedicated computing hardware customised for machine learning, and such products will soon reach the mass market; Intel, for example, is already developing one.

“Banuba takes part in a programme supporting NVIDIA research; we discuss many topics and know what to expect from devices in the years to come,” says Nekhai. “For example, we can confidently forecast a wave of devices capable of sensing image depth. That opens very interesting opportunities for the development of AR technologies.”

 

«Data is the new oil»: how machine-learning transforms product development

Banuba retains family ties with exp(capital): the companies' offices are in the same building, and the start-up has direct access to its colleagues' powerful computing resources. Vadim Nekhai considers this one of the company's main advantages: with exp(capital)'s multiprocessor cluster, Banuba's developers can speed up the solving of machine-learning tasks several times over.

“The modern movement towards neural networks and deep learning is a very serious breakthrough. But we try to see beyond the popular directions and follow every interesting idea and technological solution connected with machine learning. We try different options. We even apply machine-learning techniques to the tuning of machine-learning algorithms themselves, which gives us meta-learning. We do it in the cloud; local providers cope with the network load successfully.”

Banuba's co-founder believes that machine learning may soon completely transform product development: the standard ‘idea first, then data’ scheme will give way to data-centric development.

“Data is the new oil. If you have a large, properly prepared and curated data set, you can develop a product idea on its basis alone,” states Vadim Nekhai. “Dedicated people on our team work on preparing and conditioning data sets, and there is some particular know-how in this area. Where other companies get a set of hundreds of photos, we sometimes manage to get tens of millions.”

“Where to find the necessary data, how to prepare it, how to handle the rights to use it: all these questions are very important. Any product manager today should realise how important it is to gather as much data as possible and extract useful information from it. On this basis you can build a ‘smart user experience’: personalise the app to the person and the environment they are in, adjust it to the user's location, analyse their behaviour patterns, detect their main periods of activity, and even change app behaviour according to the battery charge level.”

 

High standards of graphical programming

The most obvious field for augmented reality is the entertainment industry, which forms another of Banuba's technological interests: graphical programming.

The start-up develops its own solutions for visualising objects in virtual scenes. Game development deals with similar things, but according to Vadim Nekhai, game developers' experience is often not enough for Banuba's challenges:

“In Belarus there are many gamedev companies, and we planned to find people with strong VFX experience able to solve Banuba's complicated graphical-programming challenges. It turned out to be harder than expected: in general, gamedev people program game logic and gameplay and take the ready-made visual part from the Unity engine. As a result, their programs spend most of the CPU time in the engine's code, and the speed of the ‘author's’ code becomes far less critical.”

Banuba's CEO notes that programming augmented-reality effects is an extraordinary area that requires deep mathematical knowledge and a clear vision of how to turn a visual idea into concrete program code. Moreover, that code should work flawlessly in different conditions, including on smartphones:

“All the developments are our own; we don't use engines, because there is no room for mistakes. That's why we're very glad to meet experienced OpenGL developers who understand how to connect physical modelling with visualisation and know various optical ways of presenting objects. If you can program a water surface with all its refractions in OpenGL, or a flag trembling in the wind and gleaming in the sunshine, we have a lot of interesting work for you.”

 

Low-level optimisation and the research-product balance

The basic languages of Banuba's research department are C and C++, with a minimum of object-oriented style. For user interfaces the team uses the traditional Objective-C on iOS and Java on Android, and low-level optimisation experience is highly valued. According to Vadim Nekhai, a significant part of the code is written in platform-specific assembly: working with x86 processors requires experience with SSE/AVX, and programming for ARM requires knowing the peculiarities of the NEON architecture. The same low level of code has to be maintained when moving algorithms to the GPU.
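The register-lane style of SSE/AVX and NEON code can be modelled even in plain Python: instead of one running sum, the loop keeps several independent accumulator ‘lanes’, one per SIMD slot, and combines them at the end, which is also what breaks the serial dependency chain that limits a naive scalar loop. A sketch of the pattern (illustrative only; real code would use intrinsics or assembly):

```python
def dot_scalar(a, b):
    """Naive dot product: one accumulator, one long dependency chain."""
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

def dot_lanes(a, b, width=4):
    """SIMD-style dot product: 'width' independent accumulator lanes,
    mirroring a 4-float SSE/NEON register, plus a scalar tail."""
    lanes = [0.0] * width
    n = len(a) - len(a) % width
    for i in range(0, n, width):
        for lane in range(width):
            lanes[lane] += a[i + lane] * b[i + lane]
    s = sum(lanes)
    for i in range(n, len(a)):   # leftover elements
        s += a[i] * b[i]
    return s
```

On real hardware the inner lane loop is a single vector instruction, so the lane version performs a quarter of the iterations of the scalar one (for 4-wide registers), and the independent lanes let the CPU pipeline the multiply-adds.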

Yet according to the start-up's co-founder, the key to joining Banuba's team lies beyond deep knowledge of mobile hardware and machine-learning experience:

“The main thing is the ability to learn, self-criticism and openness to new things. Here you won't simply tick off tasks from a list, as on many outsourcing conveyors. Don't hide from challenging, non-standard problems; ask ‘Why is it so?’ and find the reasons, wherever they may be hidden.

“Many people get truly unique experience working at Banuba. They encounter entirely new fields: iOS and Android developers improve their skills in graphical programming, while machine-learning specialists learn the peculiarities of mobile hardware.”

“We try to keep a balance between research work and product thinking. The result of our research is not always positive, which is bad from the product owner's perspective; from the researcher's perspective, a negative result is also experience and knowledge. We try many things, and many of them don't work. But we're not interested in ordinary, simple tasks. Paving our way through challenges and driving progress in the long run matters far more to us than immediate success,” concludes Vadim Nekhai, CEO and co-founder of a start-up that has every chance of becoming well known in the technology world.

 

Viktor Prokopenya has no doubts about it.

“We're sure that Banuba will bring fresh ideas and innovative solutions to the augmented reality sector, and we're happy to support this company,” states Viktor Prokopenya after closing the $5 million investment round.