
Julian Jackson (Founder and Director, VisionBridge) spoke to Charles Leclercq (CEO, ARx) about ARxVision, a wearable device that uses audio and artificial intelligence to describe the world around us and empower blind and low-vision individuals. You can read the Eye News review of the device here.

What motivated you to invest time, money, and resources in the development of ARxVision? What issue or problem are you trying to solve?

I have always wanted to empower the blind and low-vision communities to access new opportunities in an easy and engaging way by harnessing our ARxVision technology. I have long been interested in devising technology that helps these communities live a more accessible life, thanks to artificial intelligence.

I joined the team as CEO in 2020 with a background in audio augmented reality (AR) and video games. From 2009 to 2014, I worked for Ubisoft as a designer and creative manager on franchises like Assassin’s Creed, Tintin and Prince of Persia, and in research and development on early prototypes of the Microsoft Kinect and Nintendo Wii U.

While making video games, I considered how I could use that technology to solve real-life problems. I joined the Future Experience Technology team at BBC Research and Development to lead the “AR as a public service” team, which explored the potential of AR.

During the COVID-19 lockdown, our team created an AR version of the famous Glastonbury Festival, where users could place virtual stages in their homes, gardens or parks and discover new artists by walking around their own curated space. As you approached a stage, the music from the other stages would fade away while that stage grew louder. This experience highlighted the potential of creating audio experiences combined with GPS and spatial data, also known as audio augmented reality.
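To make that idea concrete, here is a minimal Python sketch of this kind of distance-based mixing. The stage names, positions and roll-off value are invented for illustration; this is not the BBC project’s actual code:

```python
import math

# Invented stage positions (metres) within a user's "curated space".
STAGES = {
    "pyramid": (0.0, 0.0),
    "acoustic": (12.0, 5.0),
}

def gain_for_distance(distance_m: float, rolloff_m: float = 8.0) -> float:
    """Full volume within rolloff_m of a stage, then inverse-distance fade."""
    return min(1.0, rolloff_m / max(distance_m, 1e-6))

def mix_levels(listener_x: float, listener_y: float) -> dict:
    """Per-stage gain so the nearest stage dominates the mix."""
    return {
        name: gain_for_distance(math.hypot(sx - listener_x, sy - listener_y))
        for name, (sx, sy) in STAGES.items()
    }

# Standing near the "pyramid" stage: its gain is 1.0, "acoustic" is quieter.
print(mix_levels(2.0, 1.0))
```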

I then joined the Horus Sight startup, which was founded in 2015, and rebooted the company into ARx. Since then, we have been working to create the device you see today.

What do you see as the key challenges in delivering a blind and low-vision wearable that could measurably make a difference to people’s lives, and what is so unique about your technology?

AI and image recognition technologies are more capable than ever. The range and type of information we can deliver to a user (text, semantic understanding, light detection, obstacles, etc.) is very broad. However, exposing this information to the user without sufficient user experience considerations can result in hard-to-use interfaces, a bit like operating an overly complicated dashboard. The challenge, then, is to deliver a product with an interface that provides value to the user. When the opportunity cost of using the product is greater than the value it delivers, the product fails.

A good example of such a product failure that I have experienced is the first versions of city bike-share schemes: the interface to hire a bike is sometimes so hard to use, and the process takes so long, that when you are in a rush you are better off using other, less green modes of transport. The shared bike fails to deliver value because the service for accessing it was poorly designed.

So it is really important not to be overwhelmed by what technology can offer, and to make sure that the manner in which a feature is accessed does not negatively impact the user experience. Every day we get requests from the community to add features like banknote, light or colour detection, and while it is very tempting to include these specific features, we often find that we haven’t found a meaningful way to integrate a feature without damaging the user experience. In that case we believe we must do more design thinking before we can include that feature. Along the way we have found that the best way to integrate new features is to understand the context in which the user operates.

In terms of hardware, we have decided to use a wired device to remove the pain of having to charge the headset, as well as the pain of pairing over Wi-Fi or Bluetooth. It’s a trade-off, but it reduces the opportunity cost of accessing the value this type of technology can deliver.

In terms of software, the ARx app started with modes, similar to what you would expect from Envision AI or SeeingAI. Our trials and user research highlighted the opportunity to use AI to anticipate which mode the app should be in, without the need for the user to select the modes. For example, the scene description mode in ARx now detects faces and reads short text when visible, so users do not have to switch to a dedicated face mode to scan a face; it is readily available whenever the context calls for it, without any friction or additional interaction required from the user.
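As a toy example of this context-driven approach, the dispatch logic might look something like the sketch below. The FrameContext fields are assumptions for illustration; ARx’s actual detection pipeline is not public:

```python
from dataclasses import dataclass

@dataclass
class FrameContext:
    """Hypothetical per-camera-frame detector outputs."""
    faces_detected: int
    short_text_visible: bool

def announcements(ctx: FrameContext) -> list:
    """Let the detected context decide what to announce; no mode switching."""
    out = ["scene description"]  # always-on baseline mode
    if ctx.faces_detected > 0:
        out.append(f"{ctx.faces_detected} face(s) ahead")
    if ctx.short_text_visible:
        out.append("reading visible text")
    return out

print(announcements(FrameContext(faces_detected=1, short_text_visible=True)))
```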

Do you believe in the ‘perfect assistive tech solution’, or do you feel there are always compromises to be made along the research and development process?

I believe that one day there will be a transcendental computer interface that will enable humanity to access tech without any barriers, which will mean that our imagination will be completely in phase with computers. When imagination is in control it is both scary and exciting: currently, we process thoughts and decide to convert them into real actions; that’s what happens when we use a touch screen or move a mouse pointer and click. When our imagination is immediately in control, without that processing step, who knows what is going to happen? At the same time, the opportunities will be incredible, because anything will be achievable for anyone with the ability to generate thoughts.

This is as far as I can think in terms of a perfect AT solution. And I do believe ARx gets us one step closer to this idea of perfection.

Now, to get there, there is a lot of work to do, and we have to compromise all the time.

First, the winning device will have to be compatible with Android and iOS smartphones because they are incredibly powerful, mainstream, and easy to develop for. Being swiftly compatible with these operating systems requires a lot of work, and we have to compromise on a number of things, such as permissions and connectivity, that all impact the user experience.

In general terms, I believe that it’s important to follow a definition of perfection in research and development, but as soon as you enter production (especially as a startup) you have to carefully consider the ecosystem in which your innovation will live and grow, and that’s when you have to compromise.

We have lots of ideas that are inspired by community feedback, but very often we have to compromise and say no because it is just not realistic at this moment in time. However, it helps us evolve our idea of perfection and how to get there.

Where do you envisage your wearable being in three years' time? (e.g. general level of awareness/understanding, market reach, traction, exit strategy?)

Long-term, we want ARxVision to be the fundamental tool for the blind and low-vision community and a staple in the health community at large.

As ARxVision technology continues to advance, we see the brand becoming the number-one option on the market for the blind community.

As we continue to advance our technology, we believe ARxVision technology can be used by more than just blind individuals, and impact other industries such as healthcare, agro-biotech, and entertainment / sports.

What are your thoughts about collaboration with other tech developers and researchers and would there be any circumstances or reasons for reaching out to them?

With the help of our university partners, we can also use this technology to conduct more research and ultimately increase funding for the blind community.

Aira is a visual interpreting service. Currently, Aira customers can access the services provided by the company via their smartphone, but they must hold the phone in their hands. We have collaborated with the Aira team to build an ARx-compatible version of Aira where users can keep their phones in their pocket and benefit from the same experience “hands-free”.

Over the past two years we have partnered with universities in the UK and US, such as Virginia Tech’s Human IMPAC-T Lab, to uncover the potential of bone conduction, and with Virginia Tech’s Music Mind Machine Lab, with whom we explored the meaning of audio augmented reality with an aim to set a standard and definition for the industry.

There are a number of secret projects and partnerships that we are currently working on; some involve a new way to benefit from wayfinding, and another brings the hands-free experience to one of the most popular apps for the blind and visually impaired (BVI) community.

Regarding an iOS update, we want our device to be compatible with all users’ devices. We are working on this!

Can you describe what contribution has been made by visually impaired patients (plus any others) and trial feedback to the development of your solution?

The community has been very active and responsive with us, and this has enabled us to learn and react quickly. Working with users is now part of our daily process.

We consistently organise testing sessions and record feedback; we then build giant boards and run a card-sorting exercise that helps us spot patterns in the feedback. We then prioritise and adjust our roadmap to meet user needs.

We are so grateful to individuals like Glenn Tookey (Sight and Sound Technology), Steve Nutt, Saqib Shaikh (Microsoft), Peter Bosher, Edward Green, Julian Jackson (VisionBridge), Owais Nawas (CIC Vision Ability) and Georgina Joyce who have been early-stage testers and helped us understand their perspective and needs.

We also learned from institutions like the Chicago Lighthouse, which were very generous with their time.

Can you describe the journey you have made, including conceptualising, funding and investments, the research and development process, prototype creation, manufacture, trialling / testing, and market launch?

Our strategy to build the first prototype of ARx was to do a mix of user research, competitive analysis and rapid, “quick and dirty” prototypes, because we wanted to get it into users’ hands early and learn quickly. In that phase, we began to understand the fundamentals of user experience and user interface design, but also how to collaborate with the community. There was a series of beta launches until earlier this year, when we felt confident about both the user experience and the stability of the product across the Android ecosystem. We’re now well into our soft launch and user feedback is positive; we’re thrilled.

Feedback is so important to us that we have introduced the ARxAcademy, a way for users to make product feature suggestions online. It is currently available on our website, but we are working on integrating this feature into our app.

Funding-wise, ARx is backed by 5Lion Ventures, based in NYC, and we work closely with them on a weekly basis. The 5Lion team adds tremendous value: they’ve helped ARx with product and business strategy, and have opened up an extensive network. Together we have moved mountains.

Now that many low-vision technologies are becoming more “eye condition / disease-specific” and “task-oriented”, how might this trend impact on your own future tech developments?

ARx is designed to work with and augment other solutions. For example, we see a future where it will be possible to use a white cane to point at things for ARx to detect.

In addition, while ARx has a strong focus on utility, we are starting to see demand for elevating experiences beyond utility, for example in the fields of culture or gaming. Imagine the ARx AI assistant providing instant art gallery and museum tours, or reading playing cards to the user.

We want ARx to become like a superpower, and an addition to any specific eye condition solution!

Can you explain the relevance and importance of ARx in a healthcare setting?

ARx is also about providing independence to vision-impaired individuals.

We’ve learned that too many medical appointments were being cancelled or not attended because patients found it too hard to travel to the hospital or healthcare facility. The technology challenge is that traditional GPS does not work well in the final approach to a destination or during transitions indoors. This problem is famously known as “the last 15 feet” challenge: when your GPS successfully takes you to an address, how do you find the right Uber car, the door number in a hotel, or the correct floor in a hospital?

We are collaborating with Dr John Ross Rizzo on technology and experience that will guide patients from their homes to precise destinations. This project relies on computer vision rather than GPS, which means it makes use of the ARx headset’s cameras and provides hands-free access to the guidance.
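As a rough illustration of why camera-based guidance can go beyond GPS, the sketch below maps a detected landmark’s horizontal position in the frame to a spoken clock-direction cue. The field of view, thresholds and “door” label are assumptions, not the project’s actual method:

```python
FOV_DEG = 70.0  # assumed horizontal field of view of the headset camera

def clock_direction(bbox_center_x: float, frame_width: int) -> str:
    """Map a detection's horizontal pixel offset to a coarse spoken cue."""
    # Offset from frame centre in degrees: negative = left, positive = right.
    offset_deg = (bbox_center_x / frame_width - 0.5) * FOV_DEG
    if offset_deg < -10:
        return "door at 11 o'clock"
    if offset_deg > 10:
        return "door at 1 o'clock"
    return "door at 12 o'clock"

# A door detected right of centre in a 1280-pixel-wide frame.
print(clock_direction(bbox_center_x=910, frame_width=1280))  # door at 1 o'clock
```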

Another example of this is the object search mode already available in the ARx app; one of our users explained how it helped them:

"The feature I liked the most is the object searching one. Although I pretty much know where everything is in my room and around my house it was fun to find out something I could hear would let me know it’s there before my cane or part of my body came into contact with it. The chair finding one I believe is excellent, there are many times I walk into a room and swing my cane around like a maniac trying to find a chair but with that feature, I am able to find it independently."

We hope that providing finer guidance than GPS can offer will help the community become more independent and access the healthcare they need and deserve.

Many thanks to Charles Leclercq for speaking to us.
