Ailments like essential tremor, a condition that causes involuntary trembling of the head, hands, and voice, and Parkinson's disease, which causes uncontrolled movements in different parts of the body, affect the everyday lives of millions of people around the world, turning simple yet fundamental actions into tasks that are difficult to achieve or to complete efficiently.

Essential tremor, a neurological disorder that is often hereditary, is thought to originate from the Vim nucleus of the thalamus, a part of the brain responsible for coordinating and controlling muscle activity.

In Singapore, Parkinson's disease is the second most common neurological disease, afflicting about three in every 1,000 people aged 50 and above; it is estimated that there are between 4,000 and 5,000 such patients here.

An Israeli startup, Insightec, says it has found a safe and effective way to alleviate the symptoms of Parkinson's tremors. The company has developed Exablate Neuro, a treatment that allows neurosurgeons to perform incisionless brain surgery to alleviate tremor associated with essential tremor and Parkinson's disease.

High-intensity focused ultrasound beams that pass through skin, muscle, fat, and bone are used to generate heat and ablate the Vim point, leading to an overall reduction of tremor, Insightec's VP of Marketing Xen Mendelsohn Aderka said.

The procedure is guided by MRI, which serves as the eyes of the treatment, focusing on the region with extreme precision to prevent adjacent healthy tissue from being burnt while monitoring the thermal variation in the area in question, she explained.

On treatment day, the patient's scalp is completely shaved and cleaned to avoid any deflection of the energy from the targeted point during the procedure. The patient lies on the treatment bed inside the imaging scanner, and a helmet-like frame that emits the beams is placed in the right position by the surgeon. The targeted area is identified several days before the treatment via a CT scan that details the shape, density and thickness of the skull.

Then, preliminary MRI images are taken to identify the precise location of the Vim point. Prior to the actual treatment, low-energy sound waves of gradually increasing intensity are directed towards this area to adjust the focal point and verify the accuracy of the target position.

This way, the surgeon can assess potential side effects, reported by the patients themselves, who remain conscious throughout the procedure, which is conducted without anesthesia.

Side effects of the sound waves can include tingling in specific areas, and these are used as an indicator of whether to adjust the focus of the ultrasound waves. The surgeon might also ask the patient to perform tasks such as drawing circles on a board to confirm the target's accuracy. Once all the preliminary tests are carried out, more energy is applied to the target, where temperatures can reach up to 60 degrees Celsius, causing a permanent lesion in the targeted tissue. For that reason, the helmet is equipped with special tubes in which cold water circulates continuously to cool the patient's skull during the procedure. The process is monitored using MRI imaging and analyzed by Insightec's software, which gives real-time feedback on temperature changes in the area under treatment. The software also directs the energy specifically to the target, sparing healthy tissue.
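The closed feedback loop described above, in which MRI thermometry both confirms that the target reaches ablation temperature and guards adjacent healthy tissue, can be sketched roughly as follows. This is a hypothetical illustration, not Insightec's actual software; the function name, thresholds and readings are all assumptions for the example.

```python
# Hypothetical sketch of closed-loop thermal monitoring during a sonication.
# Thresholds are illustrative assumptions, not clinical parameters.

TARGET_ABLATION_C = 60.0   # temperature at which a permanent lesion forms
SAFETY_LIMIT_C = 45.0      # abort if adjacent healthy tissue exceeds this

def monitor_sonication(readings):
    """Walk through (target_temp, adjacent_temp) pairs from MR thermometry.

    Returns "ablation_complete" once the target reaches ablation
    temperature, or "aborted" if adjacent tissue overheats first.
    """
    for target_c, adjacent_c in readings:
        if adjacent_c >= SAFETY_LIMIT_C:
            return "aborted"            # protect healthy tissue
        if target_c >= TARGET_ABLATION_C:
            return "ablation_complete"
    return "incomplete"                 # energy ramp ended early
```

For instance, a ramp whose target readings climb to 61 degrees while adjacent tissue stays cool would return "ablation_complete".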

After the treatment, MRI is again used to confirm the ablation area. The whole procedure lasts between two and three hours, with patients discharged on the same day, Mendelsohn Aderka said.

In addition, she said, because the treatment is non-invasive, there is no risk of infection, and recovery is rapid, as there is no collateral tissue damage, leading to minimal complications. The company claims that most side effects, which can include tingling in the fingers and tongue and a temporary loss of balance, are moderate in severity and generally disappear within one to two months following treatment.

So far, “the technology is being used in over 50 medical centers around the world, with over 2,000 neuro-patients treated,” Mendelsohn Aderka added. When asked about competitors in the market, Mendelsohn Aderka explained that there is no other company that can perform focused ultrasound incisionless brain surgery. The alternatives are “medical-surgical interventions, such as drugs, and deep brain stimulation,” she said, adding that Insightec’s technology represents an alternative to these treatments, which are not always suitable or desired by patients.

Insightec was founded in 1999 and has its corporate headquarters in Tirat Carmel, Israel, with global offices in Dallas, Miami, Tokyo, and Shanghai. According to data compiled by Start-Up Nation Central, which tracks the Israeli tech scene, the company has raised $307 million. Investors include GE Healthcare and venture firm Koch Disruptive Technologies.

Falls and fall-related injuries are common among older adults. Globally, one in three adults above 65 years old falls at least once a year. Falls are not only associated with greater morbidity and mortality in the older population, but are also linked to reduced overall functioning and early admission to long-term care facilities. Reducing fall risk in older adults is therefore an important public health objective. In Singapore, falls are a leading cause of injury among older adults. According to the National Registry of Diseases Office (NRDO) of Singapore, the crude incidence rate of unintentional falls in 2012 was 277.7 per 100,000 for adults aged 60 years and older, and the incidence rate increases sharply with age. Many of these falls happen at home, but they can also happen to senior staff at the office. Steps must therefore be taken to reduce or remove the factors that cause a person to fall.

Accidental falls are a serious issue; a fall that goes unnoticed can be fatal. An automatic fall detection device that monitors a person's daily activities and detects when they fall could be the best solution. When an elderly person falls, the device sends an alert to a family member, caretaker or office administrator so that immediate assistance can be given, whether in the confines of a home, where falls most often occur, or at the office when senior staff work alone.
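The alerting flow just described can be sketched as follows. This is a generic illustration of the idea, not any vendor's implementation; the person's name, contact list and the notify() transport are invented for the example.

```python
# Minimal sketch of a fall-alert dispatch: when a fall event is detected,
# notify every registered contact. The notify() callable stands in for any
# real transport (SMS gateway, push service, phone call).

def dispatch_fall_alert(person, contacts, notify):
    """Send a fall alert for `person` to each contact; return who was alerted."""
    message = f"Fall detected for {person}: immediate assistance may be needed."
    alerted = []
    for contact in contacts:
        notify(contact, message)   # hand off to the messaging transport
        alerted.append(contact)
    return alerted
```

A caller would plug in its own transport, for example a lambda that enqueues push notifications to the family's mobile app.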

Vayyar Imaging, the global leader in 3D sensor imaging technology that makes it possible to see through objects, recently launched the Walabot HOME. Poised to radically change the future of digital health monitoring, Walabot HOME detects if a person has fallen and automatically places a call for help, without requiring any wearables. Walabot HOME is Vayyar's flagship product in a new line of smart home devices being developed to ensure seniors stay connected when it matters most: in case of emergency.

"People want to feel comfortable in their homes or offices without the burden of needing to wear a pendant or medical alert device, but still want the security of knowing that they can get help if they need it," Vayyar Co-founder, CEO, and Chairman Raviv Melamed said in a statement. "Walabot HOME is so effective because people can set it up and then relax, secure in the knowledge it's there just in case."

Walabot HOME is easy to install and does not require further action once it's been set up. The device uses advanced, low-power radio wave technology, similar to Wi-Fi, instead of cameras, to monitor individuals' movements. This ensures occupants maintain their privacy, especially in locations where falls are more likely to happen, such as bathrooms. Walabot HOME also works in a wide range of conditions that cameras cannot, including steam and darkness, and can sense through objects like curtains and glass walls. An accompanying mobile app for iOS and Android lets you control the device.

It's best to think of Walabot HOME as a sort of central nervous system that keeps track of a person as they move throughout the home. Originally designed to spot falls in the bathroom, the service can now be implemented throughout the entire home.

This provides a number of advantages. For one, it allows Walabot HOME to sense a person through walls and curtains that would otherwise obstruct a camera. Second, it can monitor areas that other systems can't, including the bathroom, without requiring any sort of wearable device. Finally, it's far less invasive than a camera-based system, which requires a person to surrender their privacy in order to ensure their safety. Family members and caregivers can receive alerts of a fall through the Walabot HOME mobile app for iOS and Android, which also allows for two-way communication. You want to be there for your loved ones or staff, including those who are getting on in years, no matter what, so you can help prevent the worst from happening. For times when you can't be there, however, Walabot HOME can serve as your eyes. The device, which senses if a person has fallen in their home or office and then calls for help, has recently received expanded capabilities to monitor the well-being of older people.

Artificial intelligence is generating lots of buzz in other verticals. Here, I would like to explore how AV vendors and integrators can apply AI to their AV projects, drawing on case studies from vendors such as Avaya and Harman. A decade ago, Steve Jobs introduced the iPhone by explaining why it didn't include a stylus. "We're going to use the best pointing device in the world: our fingers," he said. "We're born with 10 of them."

We're also born with a voice, which is rapidly emerging as another user interface (UI), including for pro AV systems. That's largely because of advances in artificial intelligence (AI), which keeps getting better at understanding people no matter how heavy their accent or when they use everyday terms instead of industry jargon. That capability often is referred to as "natural-language understanding" or "natural-language processing."

Another reason is that people's experiences as consumers set their expectations about what's possible and preferable at work. The iPhone, for example, introduced a lot of people to concepts such as gesture control and, a few versions later, speech-powered virtual assistants. This familiarity sets the stage for pro AV to tackle challenges such as the bewilderment people feel when walking into an unfamiliar conference room and trying to figure out how to turn on the projector, connect their laptop or lower the blinds.

In April 2018, Harman partnered with IBM Watson to develop what it calls "voice-enabled cognitive rooms" for verticals such as healthcare and hospitality. The solutions began shipping in 2018 and included Harman soundbars embedded with IBM Watson's AI technology, which lets people simply talk to the equipment to get information or get it to do something.

For example, instead of using a hotel room’s thermostat, guests could say, “Turn up the heat” or “Turn on the air and set it to 20.” Or instead of using the TV remote, they could say, “Turn on Channel NewsAsia.”
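Commands like those above ultimately have to be turned into device actions. The toy sketch below shows the shape of that mapping with simple pattern rules; it is purely illustrative, since a real cognitive room would rely on a natural-language-understanding service rather than hand-written rules, and the action fields are invented.

```python
import re

# Toy rule-based parser mapping spoken hotel-room commands to device actions.
# Illustrative only; a production system would use an NLU service.

def parse_command(utterance):
    text = utterance.lower()
    m = re.search(r"set it to (\d+)", text)
    if "heat" in text or "air" in text:
        return {"device": "thermostat",
                "temperature": int(m.group(1)) if m else None}
    m = re.search(r"turn on (.+)", text)
    if m:
        return {"device": "tv", "channel": m.group(1).strip()}
    return {"device": None}
```

For example, "Turn on the air and set it to 20" yields a thermostat action with temperature 20, while "Turn on Channel NewsAsia" falls through to a TV action.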

“It’s all about making it simple and easy for guests,” says David McKinney, vice president of Harman’s Hospitality Customer Solution unit. “They don’t have to learn the environment. They don’t have to learn a certain terminology. It’s a natural-language type approach to it.”

What’s the business case?

For hotels and other businesses, a big part of AI-powered AV's appeal is that it helps them save money and increase staff productivity. For example, convention centres, libraries and other large venues use digital signage to help people find their way around on their own, saving money because they don't need as many staff, or any, to help with wayfinding. AI has the potential to extend that efficiency to many other areas.

Suppose a hotel room has a smart speaker such as Amazon Alexa or Google Home, and it's connected to multiple departments including maintenance and housekeeping. Now when guests say, "I need more towels," or "There's no hot water," the system can automatically alert staff, without the need for additional staff at the front desk to field and relay those calls.

The scenario is also an example of one way AV firms can add value: by identifying tasks that can be automated. For instance, an AV consultant could analyse the front desk's inbound calls to determine the 25 most common guest inquiries and then develop an AI solution capable of fielding and routing them without staff involvement. If that analysis also shows how many personnel hours would be saved, it would help justify the project's budget.
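The routing step in that scenario can be sketched with a simple keyword map from guest utterances to departments. This is an assumption-laden illustration, not any vendor's product; the keywords and department names are invented, and a real deployment would use trained intent classification rather than substring matching.

```python
# Hypothetical keyword router for guest requests: alert the right department
# directly so front desk staff never have to field the call.

DEPARTMENT_KEYWORDS = {
    "housekeeping": ["towel", "sheets", "cleaning"],
    "maintenance": ["hot water", "air con", "light", "leak"],
}

def route_request(utterance):
    """Return the department for an utterance, or 'front_desk' as a fallback."""
    text = utterance.lower()
    for department, keywords in DEPARTMENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return department
    return "front_desk"
```

"I need more towels" would go straight to housekeeping, "There's no hot water" to maintenance, and anything unrecognized still reaches a human at the front desk.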

That type of analysis also highlights how AI ties in with another buzzword: big data. Suppose the AI is embedded in wayfinding digital signage, and people keep asking about the same half-dozen places. That could point to a need to update the signage content to anticipate those questions so people no longer feel a need to ask them. It also could identify business opportunities. For example, if hotel guests often ask their room's smart speaker whether there's an Italian or Indian restaurant nearby, maybe it's time to add one to the property. "There's a lot of data that comes out of how these systems are used, [such as] what sorts of commands are coming in," McKinney says.

Another business driver involves brand reputation. For example, the AI system could be programmed to recognize words that indicate a person's emotion. If it's negative, the system could alert a staff member to resolve the problem, all without the person asking for help, and avoiding experiences that lead to negative reviews and lost business. "If someone says turn on the expletive lights, then you can tell they're not having the best experience," McKinney says. "The system could have the customer service team call the guest and make sure they can solve those issues."
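At its simplest, the expletive-triggered escalation McKinney describes amounts to scanning an utterance for words that signal a bad experience. The sketch below shows that idea only; the word list is invented, and production systems use trained sentiment models rather than word lists.

```python
# Toy sentiment trigger: flag an utterance for the customer-service team if
# it contains any word from a (purely illustrative) negative-marker list.

NEGATIVE_MARKERS = {"damn", "broken", "useless", "terrible", "stupid"}

def needs_escalation(utterance):
    """Return True if the utterance suggests a negative guest experience."""
    words = {word.strip(".,!?") for word in utterance.lower().split()}
    return bool(words & NEGATIVE_MARKERS)
```

"Turn on the damn lights!" would be flagged for follow-up, while an ordinary command would not.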

AT YOUR SERVICE!

In the enterprise, AI also could help integrators, vendors and their clients provide better user experiences while lowering support costs. For example, in October 2018, Avaya announced the A.I. Connect initiative to develop AI solutions for applications such as contact centres and unified communications. An enterprise AV/IT help desk is a contact centre, so it's worth looking at how AI use cases from other contact centre applications could be adapted. One A.I. Connect partner is Nuance, which specializes in natural-language understanding for applications such as interactive voice response (IVR) phone systems and virtual assistants. Nuance has discussed how an AI platform could ingest all of the manuals, FAQs and other collateral for a product and use that to power a virtual assistant to support that product.

In consumer-facing contact centres, virtual assistants often can handle up to 80% of inquiries. So if AV vendors, integrators or their customers can achieve comparable automation, it would free up AV/IT staff to focus on other tasks.

"The variety of use cases for applying AI to improving customer experiences is somewhat staggering," says Eric Rossman, Avaya vice president, alliances and partnerships. "Companies that are focused on implementing digital channels are looking heavily towards expert systems-based chatbots and virtual assistants, which rely upon semantic analysis, natural-language speech recognition and rule-based pattern matching capabilities."

Remember the example of a hotel guest who swears about turning on the lights? The AI identified that frustration using sentiment analysis, which a help desk virtual assistant could use to determine that it’s time to transfer the call to a human. “Sentiment analysis offers insight to the effectiveness of the customer interaction based on speech patterns, timing, volume and key words and phrases used,” Rossman says. “[It has] reached a level of sophistication thanks to AI techniques, providing real-time feedback to the agent as to how receptive a caller may be to the overall tone and tenor of the conversation.”

Of course, virtual assistants won’t be able to handle every inquiry, especially technically complex ones. In those cases, AI still could play a role by taking over some of the work that help desk staff typically do during a call. One possibility is listening on the call for certain keywords, such as product names. “Being able to proactively place guidance and related resources in the hands of the agent without them having to manually search knowledge bases and other internal sources for those materials only makes the customer interactions go smoother and flow more naturally,” Rossman says. “This ‘agent augmentation’ capability can easily leverage AI-enabled applications that data mine the wealth of knowledge bases, help desk tickets, even internal video training and recorded webinars that a company may have, learning to identify common themes and recurring answers that can form ready-made results for both automated and human-assisted interactions.”
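The "agent augmentation" idea above, listening for product names and surfacing related resources, reduces to matching a transcript against a knowledge base. The sketch below is a hedged illustration: the product names and article titles are made up, and a real system would mine tickets, manuals and recorded webinars as Rossman describes rather than use a hard-coded dictionary.

```python
# Illustrative agent augmentation: spot known product names in a live call
# transcript and surface matching knowledge-base articles to the agent.

KNOWLEDGE_BASE = {
    "projector x100": ["X100 lamp replacement guide", "X100 HDMI troubleshooting"],
    "soundbar s20": ["S20 pairing instructions"],
}

def suggest_articles(transcript):
    """Return KB articles for every known product mentioned in the transcript."""
    text = transcript.lower()
    suggestions = []
    for product, articles in KNOWLEDGE_BASE.items():
        if product in text:
            suggestions.extend(articles)
    return suggestions
```

A transcript mentioning the hypothetical "Projector X100" would immediately put its lamp and HDMI guides in front of the agent, with no manual searching.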

AV also could adapt voice biometrics, which some contact centres and virtual assistants use to authenticate users so they don’t have to remember a PIN or password. One possibility is a conference room where the AI identifies each presenter by voice and automatically downloads their content from the cloud to the projector or display. That would alleviate the common frustration of trying to figure out an unfamiliar AV system.

New skills required?

Some of these scenarios might seem a bit outside of pro AV's traditional wheelhouse. But so are energy efficiency, digital signage content creation and the Internet of Things (IoT), which are just three examples of areas that some AV firms have expanded into. For instance, integrators selling Harman-IBM Watson solutions don't have to hire, say, speech scientists to design and support voice-powered systems. Instead, they can focus on installing mics and loudspeakers. "We've built software applications to enable people to install and make it easy to set up a mass deployment," McKinney says. "[For] integrators doing those sorts of control systems already, a lot of their skill sets can port into that."

AI also could give integrators and end users new ways to maximize the effectiveness and ROI of traditional AV systems. In retail, for example, AI could analyse camera feeds to determine how certain demographics react to certain content on digital signage. "We are seeing a growing number of retailers either adding or looking to add audience measurement technologies to serve two functions," says Jason Cremins, founder and CEO of Signagelive, which is working with AdMobilize on AI analytics. "The first is to collect viewer data that can be analysed against the proof of play (media logs) and proof of display (device status data) that we collect and report within our platform. Adding proof of view completes the dataset, allowing them to [apply] POS sales data and other internal and external metrics (e.g., weather) to provide a deep insight into the impact of their digital signage network and content strategy.

“The second use case is using the data gathered to dynamically shape and schedule the media playing on the digital signage displays. In this scenario, the scheduled content is adjusted at the point of playback to optimize the content shown based on the insights gathered.”
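That second use case, dynamically choosing what plays based on who is watching, can be sketched as a simple selection over audience counts. The segments, file names and dominant-segment rule below are all assumptions for illustration, not AdMobilize's or Signagelive's actual logic.

```python
# Illustrative audience-driven scheduling: play the content mapped to the
# dominant audience segment reported by an analytics camera.

PLAYLIST = {
    "young_adult": "sneaker_promo.mp4",
    "family": "theme_park_offer.mp4",
    "default": "brand_loop.mp4",
}

def pick_content(audience_counts):
    """Choose content for the dominant segment; fall back to the default loop."""
    if not audience_counts or sum(audience_counts.values()) == 0:
        return PLAYLIST["default"]
    dominant = max(audience_counts, key=audience_counts.get)
    return PLAYLIST.get(dominant, PLAYLIST["default"])
```

With five young adults and two families in view, the hypothetical sneaker promotion plays; with nobody in view, the display returns to its default brand loop.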

For retailers and other businesses that use digital signage, one longstanding challenge is quantifying the reach and effectiveness of both the displays' locations and the content on them. AI enables them to get deeper, actionable insights that wouldn't be practical or possible if humans did that analysis. Faster analysis also means businesses can react faster.

"One thing we tell all of our partners is to initially correlate the data to the brief or RFP that will drive the investment in digital signage in the first place," says Mike Neel, AdMobilize global head of marketing/sales. "We often find that the data that can be provided greatly improves the KPIs associated with the investment in digital signage.

“Compelling content is by and large dictated by the old adage ‘right place, right time.’ Real-time data helps facilitate that.”