Posts

Starkey hearing aids

Starkey hearing aids, Stephen Neal Hearing

 

We sell and dispense all types of hearing aids at the Keynsham hearing centre run by Stephen Neal. One of the big guns is Starkey, which recently announced a new innovation for its Livio AI hearing aids. It now comes with fall detection, meaning it will detect if you have a fall and, if paired with your phone, message up to three contacts that you have specified beforehand.

Read the full press statement below for more info, or if you are interested in knowing more, please pop in or make an appointment so we can walk you through the new hearing aid and its features.

 

Stephen Neal hearing news:

Starkey Releases World’s First Hearing Aid with Fall Detection and Alerts to Livio AI Users

Starkey Hearing Technologies, Eden Prairie, Minn, has released its new Fall Detection and Alert feature in Livio AI hearing aids to a limited number of hearing professionals, and plans to offer the feature to all dispensing professionals and their clients in late February, according to CTO Achin Bhowmik, PhD, in an interview with Hearing Review on Tuesday, December 18. Using integrated sensors, the Fall Detection and Alert feature is designed to automatically detect falls and send messages to as many as three contacts.

Fall detection sensors are currently implemented in all Livio AI devices as part of its standard hardware platform, and Starkey has been working on the Fall Alert feature to maximize its utility for end users prior to the system’s widespread implementation.

Falls are a massive public health problem, particularly for older adults. It’s estimated that injuries due to falls will account for $67.7 billion in public health spending by 2020, and according to the National Council on the Aging (NCOA), falls are currently responsible for an older adult being admitted to a US emergency room every 11 seconds. Additionally, people with hearing loss are particularly susceptible to falls. A Johns Hopkins study suggests that having hearing loss triples the risk of falls for people age 40 and older, and the findings hold up regardless of whether their hearing loss is moderate or severe.


How Starkey Fall Detection and Alerts work. Starkey’s new Fall Detection system is said to have several benefits over existing stand-alone medical alert systems, which are typically attached to a lanyard around the neck. “The first key advantage is that a hearing aid is almost always in your ear during your active hours, making for one less thing to carry or remember. One of the major problems with medical alert systems is getting people to wear them,” says Bhowmik. “Second, we have two fall detection sensors [in binaural fittings] for the right side and the left side, whereas most fall detection systems have only one. And the way the two sensors are spaced apart and the way in which you hold your head, we can get better and more accurate results than neck-worn sensors designed to detect falls.”

Starkey CTO Achin Bhowmik spoke about the possibility of fall detection and other sensor-based capabilities at the 2018 Starkey Expo held in January.


According to Bhowmik, part of Starkey’s recent research has revolved around what constitutes an actual fall as opposed to false-positives such as quick downward movements or even accidentally dropping the hearing aid. “If you take the hearing aid off your ear and drop it on the ground, you will not get a false-positive for falling with Livio AI,” says Bhowmik. “We have been working on [eliminating false-positives] for over a year. A good AI system is only as good as the data you train the system with. In this particular case, if the left hearing aid detects a fall, it immediately checks with the right hearing aid to see if the data matches what would indicate a fall for the system. Unless it detects a fall from the hearing aids in tandem for both the right and left sides of the head, the device will eliminate those non-fall events and false-positives.”
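The binaural cross-check Bhowmik describes can be sketched roughly as follows. This is an illustrative assumption, not Starkey's actual implementation: the score fields, threshold, and timing window are all invented for the example.

```python
# Illustrative sketch only: accept a fall only when both ears' motion
# models agree within a short time window, which rejects single-ear
# events such as a dropped hearing aid.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    timestamp_ms: int    # when this ear's sensor flagged a candidate fall
    impact_score: float  # 0.0-1.0 confidence from that ear's motion model

def is_confirmed_fall(left: SensorEvent, right: SensorEvent,
                      threshold: float = 0.8, max_skew_ms: int = 200) -> bool:
    both_confident = (left.impact_score >= threshold and
                      right.impact_score >= threshold)
    in_sync = abs(left.timestamp_ms - right.timestamp_ms) <= max_skew_ms
    return both_confident and in_sync

# A dropped left aid registers an impact, but the right ear does not:
is_confirmed_fall(SensorEvent(1000, 0.95), SensorEvent(1010, 0.10))  # False
# Both ears agree within 50 ms, so the event counts as a genuine fall:
is_confirmed_fall(SensorEvent(1000, 0.95), SensorEvent(1050, 0.90))  # True
```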


Starkey Livio AI hearing aid.

The hearing care professional will be able to activate Fall Detection and Alerts through an easy-to-use interface within the fitting software for Starkey’s Livio AI hearing aids. The user can then enter the Auto Alert contacts in the Thrive Hearing app: up to three people who are to be alerted in the event of a fall. When a fall is detected by the system, an audio prompt asks the user if they have fallen. He or she then has 60 seconds to provide an Event Cancellation and stop the outgoing Fall Alert messages from being sent to their designated contacts. If the hearing aid user has fallen and elects to send the Fall Alert message to his/her contacts, they receive confirmation when each contact has been successfully reached.

The system also allows for a Manual Alert which can be activated by simply pressing the hearing aid button, sending an alert for a fall or non-fall related event. “Maybe you didn’t fall, but instead just felt dizzy or were otherwise forced to sit down on the floor,” explains Bhowmik. “Obviously, this is not a fall. But you can still use the Manual Alert to get help when you need it. By tapping a button, you can send an automatic alert to your contacts, telling them you need assistance.”
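Taken together, the detection prompt, cancellation window, and Manual Alert steps amount to a simple flow, sketched below. The 60-second window and three-contact limit come from the article; the function names and everything else are invented for illustration.

```python
# Hypothetical sketch of the alert flow described above, not Starkey's code.
CANCEL_WINDOW_S = 60   # the wearer has 60 seconds to cancel (per the article)
MAX_CONTACTS = 3       # the Thrive app stores up to three Auto Alert contacts

def run_fall_alert(contacts, cancelled_after_s=None):
    """Return the list of contacts actually messaged.

    cancelled_after_s: seconds after the audio prompt at which the wearer
    issued an Event Cancellation, or None if the alert was allowed to go out.
    """
    if cancelled_after_s is not None and cancelled_after_s <= CANCEL_WINDOW_S:
        return []  # cancelled in time: no messages are sent
    notified = []
    for contact in contacts[:MAX_CONTACTS]:
        # In the real system this would send the message via the paired
        # phone and report back a per-contact delivery confirmation.
        notified.append(contact)
    return notified

def run_manual_alert(contacts):
    """A Manual Alert (button press) skips detection and sends immediately."""
    return run_fall_alert(contacts)
```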

This is just another step in the direction of making the hearing aid a multi-purpose, multi-functional device, according to Starkey.

To learn more about Starkey’s Livio AI you can visit: https://www.starkey.com/hearing-aids/technologies/livio-artificial-intelligence-hearing-aids

 

Frome ear syringing available now!


If you are closer to Devizes than to Keynsham, we would recommend our sister company

Wiltshire ear clinic 

 

Frome ear syringing or ear wax removal. The Keynsham hearing centre is an independent hearing centre run by Stephen Neal and Anita Neal, based in Keynsham near Bath. Just a short drive and you can get an earlier appointment for your ear wax removal or your hearing test. Keynsham hearing is also a major Somerset centre for the latest digital hearing aids. If you are suffering with hearing loss and need impartial expert advice, then please call Anita on reception to book your appointment to speak with Stephen.

Watch our ear wax removal video here.

 

Stephen Neal News:

 

A New Enhanced Operating System in Phonak Hearing Aids: AutoSense OS 3.0

Original story by The Hearing Review

Tech Topic | February 2019 Hearing Review

A review of the rationale for and enhanced features in AutoSense OS 3.0 with binaural signal processing, and how the new system is designed to achieve the most appropriate settings for the wearer, optimizing hearing performance in all listening environments, including media streaming.

It can be challenging to hear, understand, and actively engage in conversation in today’s fast-paced and “acoustically dynamic” world, especially for a listener with hearing loss. The Phonak automatic program has been designed to adapt seamlessly, based on the acoustic characteristics of the present environment and the benefit for clients.

Ear wax removal Frome

AutoSense OS™ 3.0 is the enhanced automatic operating system in Phonak Marvel™ hearing aids. It has been optimized to recognize additional sound environments for even more precise classification, applying dual path compression, vent loss compensation, and a new first-fit algorithm. In combination, these new enhancements to the Phonak automatic classification system ensure that the listener gains access to speech clarity and quality of sound irrespective of the environment, enabling them to actively participate in everyday life.

Optimal sound quality in every listening environment for listeners with hearing loss is always the goal of hearing aid manufacturers and hearing care professionals alike. As pointed out by MarkeTrak, “Hearing well in a variety of listening situations is rated as highly important to hearing aid wearers and has a direct impact on the satisfaction of hearing aid use throughout daily tasks and listening environments.”1

Without conscious effort, humans naturally classify audio signals throughout each day. For example, we recognize a voice on the telephone, or tell the difference between a telephone ring versus a doorbell ring. For the most part, this type of classification task does not pose a significant challenge; however, problems may arise when the sound is soft, when there is competing noise, or when the sounds are very similar in acoustical nature. Of course, these tasks become even more difficult in the presence of a hearing loss, and hence, great strides have been made in hearing instrument technology to incorporate classification capabilities within the automatic program.

Technology Evolution

In previous years, the sound processing of hearing aids was limited to a single amplification setting used for all situations. However, since the soundscape around us is dynamic—with frequent acoustical changes in the environment—it is unrealistic for a hearing aid with only one amplification setting to deliver maximum benefit in every environment. The evolution of hearing aids has seen the introduction of sound-cleaning features, such as noise cancellation, dereverberation, wind noise suppression, feedback cancellation, and directionality. These features offer maximum benefit to overall sound quality and speech intelligibility when they are appropriately applied, based on analysis of the sound environment.

Rather than having these sound-cleaning features permanently activated, their impact is greatest when they are applied selectively. For example, a wearer may not hear oncoming traffic if noise cancellation is permanently suppressing sound from all directions. Thus, defaults are set in the system for different environments.

Frome hearing aid centre

Of course, the possibility exists to add manual programs to accommodate acoustic characteristics of specific listening environments (eg, an “everyday” program with an omnidirectional microphone enabled and a “noise” program with a directional microphone enabled). However, having several manual programs increases the complexity for the hearing aid wearer. Research data shows the increasing preference of wearers for automatically adaptive sound settings over manual programs for different environments,2 and this is further confirmed by data-logging statistics which reveal a decline in manually added programs with the launch of newer technology platforms (Figure 1).3

Figure 1. Market research data from Phonak in 2017: Percentage of fittings with manual programs at 2nd session across hearing aid platforms Spice/Spice+, Quest, Venture, and Belong (n = 183,331).


Results of studies focusing specifically on speech intelligibility demonstrate that the majority of participants achieve a 20% improvement in speech understanding when listening with AutoSense OS rather than with a “preferred” manual program across a wide variety of listening environments, suggesting that manual programs may not always be appropriately or accurately selected.4 Even more interesting is the fact that users rate sound quality as being equal between the automatic and manual programs.5 According to this same research from Searchfield et al,5 a possible explanation may be that the practical application of selection relies on the wearer’s manual dexterity, normal cognition, noticeable benefit, and motivation levels. Furthermore, their research confirms a bias towards selection of the first program in the setup, whether or not this would be considered “audiologically” optimal.

Having an automatic program which can seamlessly adjust to select the most appropriate settings in any environment therefore saves both the client and the hearing care professional effort, time, and hassle.

First-generation AutoSense OS™

When Phonak AutoSense OS was originally developed, data from several sound scenes was recorded and used to “train” the system to identify acoustic characteristics and patterns. These characteristics include level differences, estimated signal-to-noise ratios (SNRs), and synchrony of temporal onsets across frequency bands, as well as amplitude and spectrum information. Probabilities of the degree of match between “trained” versus “identified” acoustic parameters in real time are then calculated for the optimal selection of sound settings in each environment. There are seven sound classes: Calm Situation, Speech in Noise, Speech in Loud Noise, Speech in Car, Comfort in Noise, Comfort in Echo, and Music. Three of the programs—Speech in Loud Noise, Music, and Speech in Car—are considered “exclusive classes” (ie, stand-alone) while the other four programs can be activated as a blend when it is not possible to define complex, real-world environments by one acoustic classification. For example, Comfort in Echo and Calm Situation can be blended with respect to how much each of these classifications is detected in the environment.
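The exclusive-versus-blended behaviour can be illustrated with a toy selector. The class names come from the article; the probability handling and normalization below are invented for the example and are not Phonak's actual algorithm.

```python
# Toy sketch: exclusive classes run stand-alone; blendable classes are
# mixed in proportion to how strongly each is detected.
EXCLUSIVE = {"Speech in Loud Noise", "Music", "Speech in Car"}
BLENDABLE = {"Calm Situation", "Speech in Noise",
             "Comfort in Noise", "Comfort in Echo"}

def select_program(probabilities):
    """probabilities: class name -> degree of match in [0, 1]."""
    best = max(probabilities, key=probabilities.get)
    if best in EXCLUSIVE:
        return {best: 1.0}  # stand-alone program
    blend = {c: p for c, p in probabilities.items()
             if c in BLENDABLE and p > 0}
    total = sum(blend.values())
    return {c: p / total for c, p in blend.items()}  # normalized mix

# Mixes Calm Situation and Comfort in Echo at roughly a 3:1 ratio:
select_program({"Calm Situation": 0.6, "Comfort in Echo": 0.2,
                "Speech in Loud Noise": 0.1})
```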

Enhanced Benefits for Wearers

With AutoSense OS 3.0, Phonak has gone a step further and incorporated data from even more sound scenes for the classes Calm Situation, Speech in Noise, and Noise into the training for additional system robustness. Enabling the desired signal processing is the goal of automatic classification, so to support the wearer’s understanding in speech-in-noise situations, the program Speech in Noise is activated even earlier than before.

Ear syringing Frome, Somerset

AutoSense OS 3.0 is the foundation for steering the signal processing and applying the most appropriate setting for the wearer based on the acoustics present in the environment. Refinements to the audiological settings within this are always sought to further enhance the user experience, and the improvements occur in different areas of the signal processing.

In order to maintain the natural modulations of speech in noise as well as streamed media, dual path compression is available and activated based on the listening environment. This allows temporal and spectral cues in speech to be more easily identified and used by the wearer.6

It is known that a full and rich sound is preferred by wearers while streaming audio, so the system enhances the sound quality of streamed audio signals by increasing the vent loss gain compensation. The result is an increase in low-frequency gain by up to 35 dB, which is especially beneficial to overcome the vent loss of a receiver-in-canal (RIC) hearing aid, most likely to be fitted with an open coupling (depending on the hearing loss and/or client comfort). This low-frequency “boost” is applied to streamed signals (or any other alternative input source, including a telecoil), while inputs received directly to the hearing aid microphones remain uncompromised, maintaining the frequency response of a Calm situation.
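As a numeric illustration of the streamed-path boost: the 35 dB maximum comes from the article, while the crossover frequency and the shape of the gain curve below are assumptions made for the sketch.

```python
# Hedged sketch: boost low frequencies on the streamed path only, leaving
# the microphone path's frequency response untouched. The 1 kHz crossover
# and linear ramp are invented; only the 35 dB maximum is from the text.
MAX_BOOST_DB = 35.0
CROSSOVER_HZ = 1000.0  # assumed upper edge of the "low frequency" region

def vent_loss_gain_db(freq_hz, source):
    if source != "stream":       # direct microphone input: no extra gain
        return 0.0
    if freq_hz >= CROSSOVER_HZ:  # boost applies to low frequencies only
        return 0.0
    # Ramp from the full boost at 0 Hz down to 0 dB at the crossover
    return MAX_BOOST_DB * (1.0 - freq_hz / CROSSOVER_HZ)
```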

The Adaptive Phonak Digital (APD) algorithm has also been enhanced for spontaneous first-fit acceptance. The gain for first-time wearers fitted to an adaptation level of 80% has been softened for frequencies above 3000 Hz to reduce reported shrillness, but without compromising speech intelligibility. The desired effect of this is that the wearer experiences a comfortable and clear sound quality from the outset.7

New Classification of Media Signals 

Listening to music and enjoying it is achieved by an alternate setting than that used to attain optimal speech understanding. In an internal study conducted at the Phonak Audiology Research Center (PARC), participants emphasized their preferences for clarity of speech for dialogue-dominated sound samples and sound quality for music-dominated samples (C Jones, unpublished data, “Preferred settings for varying streaming media types,” 2017). This preference applies not only in the acoustic environment where signals reach the hearing instrument microphones directly, but also for streamed media inputs via the Phonak TV Connector or Bluetooth connection to a mobile device.

Phonak Audéo Marvel with AutoSense OS 3.0 now incorporates streamed inputs into the automatic classification process, offering the wearer speech clarity as well as an optimal music experience. A recent study conducted at DELTA SenseLab in Denmark confirmed that the new Audéo Marvel, in combination with the TV Connector, is rated by wearers as close to their defined ideal profile of sound attributes for streamed media across a range of samples including speech, speech in noise, music, and sport (Figure 2). The Audéo Marvel streaming solution was also rated among the top streaming solutions across 7 competitor solutions.8 This confirms that the way in which the classifier now categorizes streamed media into the sound classes “Speech” versus “Music” is yet another way in which the system provides ideal hearing performance for wearers in their everyday lives.

Figure 2. Sound attributes plot for Ideal profile (in gray) & AutoSense OS 3.0 in Phonak Audéo Marvel with TV Connector (in green).


Binaural VoiceStream Technology

The Binaural VoiceStream Technology™ has been reintroduced within AutoSense OS 3.0. This technology facilitates binaural signal processing, such as binaural beamforming, and enables programs and features such as Speech in Loud Noise (when StereoZoom™ is activated), Speech in 360°, and DuoPhone. StereoZoom uses 4 wirelessly connected microphones to create a narrow beam towards the front, for access to speech in especially loud background noise. We know that the ability to stream the full audio bandwidth in real time and bidirectionally across both ears improves speech understanding and reduces listening effort in challenging listening situations.9 This reduction in listening effort, and consequently, memory effort, has been demonstrated in recent studies employing electrophysiological measures, such as electroencephalography (EEG), where significantly reduced Alpha-wave brain activity is noted when listening with StereoZoom compared to listening with more open approaches of directionality.10 When we consider this in terms of the “Limited Resources Theory” described in psychology by Kahneman11 (ie, that the brain operates on a limited number of neural resources), it highlights that efficiencies in sensory processing, through use of such advanced signal processing, may serve to free up resources to benefit higher cognitive processing for the wearer.

Taking this a step further to look into behavioral patterns of speakers and listeners with hearing loss in a typical group communication scenario in the real world, methods such as video and communication analyses have been used effectively. Changes in behavior when listening with StereoZoom versus traditional fixed directional technologies have been compared and correlated with subjective ratings of listening effort. StereoZoom has been shown to increase communication participation by 15%, and decrease listening effort by 15% relative to the fixed directional condition.12

Summary

The ability of a hearing instrument to offer acceptable “hands-free” listening by automatically adapting to multiple situations increases the adoption rate of the instrument. The enhanced AutoSense OS 3.0, with binaural signal processing, achieves this by selecting the most appropriate settings for the wearer, optimizing hearing performance in all listening environments, and now during media streaming, too. The wearer is freed from expending energy on effortful listening and can focus their enjoyment instead on tasks which are more meaningful to them, confident in the knowledge that their hearing instruments will automatically take care of the rest.



Correspondence can be addressed to Tania Rodrigues at: tania.rodrigues@phonak.com

Citation for this article: Rodrigues T. A new enhanced operating system in Phonak hearing aids: AutoSense OS 3.0. Hearing Review. 2019;26(2)[Feb]:22-26.

References 

  1. Kochkin S. MarkeTrak VIII: Consumer satisfaction with hearing aids is slowly increasing. Hear Jour. 2010;63(1):19-32.

  2. Rakita L; Phonak. AutoSense OS: Hearing well in every listening environment has never been easier. https://www.phonakpro.com/content/dam/phonakpro/gc_hq/en/resources/evidence/white_paper/documents/insight_btb_autosense-os_belong_s3_028-1585.pdf. Published August 2016.

  3. Überlacker E, Tchorz J, Latzel M. Automatic classification of acoustic situation versus manual selection. Hörakustik. 2015.

  4. Rakita L, Jones C. Performance and preference of an automatic hearing aid system in real-world listening environments. Hearing Review. 2015;22(12):28-34.

  5. Searchfield GD, Linford T, Kobayashi K, Crowhen D, Latzel M. The performance of an automatic acoustic-based program classifier compared to hearing aid users’ manual selection of listening programs. Int J Audiol. 2017;57(3):201-212.

  6. Gatehouse S, Naylor G, Elberling C. Linear and nonlinear hearing aid fittings-1. Patterns of benefit. Int J Audiol. 2006;45(3):130–152.

  7. Jansen S, Woodward J; Phonak. Love at first sound: The new Phonak precalculation. https://www.phonakpro.com/content/dam/phonakpro/gc_hq/en/resources/evidence/white_paper/documents/insight_btb_marvel_precalculation_season4_2018_028-1931.pdf. Published July 2018.

  8. Legarth S, Latzel M; Phonak. Benchmark evaluation of hearing aid media streamers. DELTA SenseLab, Force Technology. www.phonakpro.com/evidence

  9. Winneke A, Appell J, De Vos M, et al. Reduction of listening effort with binaural algorithms in hearing aids: An EEG study. Poster presented at: The 43rd Annual Scientific and Technology Conference of the American Auditory Society; March 3-5, 2016; Scottsdale, AZ.

  10. Winneke A, Latzel M, Appleton-Huber J; Phonak. Less listening- and memory effort in noisy situations with StereoZoom. https://www.phonakpro.com/content/dam/phonakpro/gc_hq/en/resources/evidence/field_studies/documents/fsn_stereozoom_eeg_less_listening_effort.pdf. Published July 2018.

  11. Kahneman D. Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall, Inc; 1973.

  12. Schulte M, Meis M, Krüger M, Latzel M, Appleton-Huber J; Phonak. Significant increase in the amount of social interaction when using StereoZoom. https://www.phonakpro.com/content/dam/phonakpro/gc_hq/en/resources/evidence/field_studies/documents/fsn_increased_social_interaction_stereozoom_gb.pdf. Published September 2018.

 

 

Earwax removal Bristol and Bath

Ear wax removal Bristol and Bath by Stephen Neal

 

Out of hours earwax removal available weekly.

Brainwave Abnormality Could Be Common to Parkinson’s Disease, Tinnitus, Depression

Stephen Neal news update:

A brainwave abnormality could be a common link between Parkinson’s disease, neuropathic pain, tinnitus, and depression—a link that authors of a new study suggest could lead to treatment for all four conditions.

Dr Sven Vanneste, an associate professor in the School of Behavioral and Brain Sciences at The University of Texas at Dallas, is one of three authors of a paper in the journal Nature Communications regarding thalamocortical dysrhythmia (TCD), a theory that ties a disruption of brainwave activity to the symptoms of a wide range of neurological disorders, The University of Texas announced.

Dr Sven Vanneste, associate professor in the School of Behavioral and Brain Sciences.


Vanneste and his colleagues—Dr Jae-Jin Song of South Korea’s Seoul National University and Dr Dirk De Ridder of New Zealand’s University of Otago—analyzed electroencephalograph (EEG) and functional brain mapping data from more than 500 people to create what Vanneste believes is the largest experimental evaluation of TCD, which was first proposed in a paper published in 1996.

“We fed all the data into the computer model, which picked up the brain signals that TCD says would predict if someone has a particular disorder,” Vanneste said. “Not only did the program provide the results TCD predicted, we also added a spatial feature to it. Depending on the disease, different areas of the brain become involved.”


Brainwaves are the rapid-fire rhythmic fluctuations of electric voltage between parts of the brain. The defining characteristics of TCD begin with a drop in brainwave frequency—from alpha waves to theta waves when the subject is at rest—in the thalamus, one of two regions of the brain that relays sensory impulses to the cerebral cortex, which then processes those impulses as touch, pain, or temperature.

A key property of alpha waves is to induce thalamic lateral inhibition, which means that specific neurons can quiet the activity of adjacent neurons. Slower theta waves lack this muting effect, leaving neighboring cells able to be more active. This activity level creates the characteristic abnormal rhythm of TCD.

“Because you have less input, the area surrounding these neurons becomes a halo of gamma hyperactivity that projects to the cortex, which is what we pick up in the brain mapping,” Vanneste said.

While the signature alpha reduction to theta is present in each disorder examined in the study—Parkinson’s, pain, tinnitus, and depression—the location of the anomaly indicates which disorder is occurring.

“If it’s in the auditory cortex, it’s going to be tinnitus; if it’s in the somatosensory cortex, it will be pain,” Vanneste explained. “If it’s in the motor cortex, it could be Parkinson’s; if it’s in deeper layers, it could be depression. In each case, the data show the exact same wavelength variation—that’s what these pathologies have in common. You always see the same pattern.”
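Written out as a lookup, the mapping in Vanneste's quote is simply the following. This is a gloss of the interview for illustration, not the study's actual machine-learning classifier.

```python
# The same alpha-to-theta slowing appears in every disorder; only the
# brain region carrying the signature differs.
DISORDER_BY_REGION = {
    "auditory cortex":      "tinnitus",
    "somatosensory cortex": "neuropathic pain",
    "motor cortex":         "Parkinson's disease",
    "deeper layers":        "depression",
}

def likely_disorder(region):
    # Regions without a known TCD association fall through to "unknown"
    return DISORDER_BY_REGION.get(region, "unknown")
```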

EEG data from 541 subjects was used. About half were healthy control subjects, while the remainder were patients with tinnitus, chronic pain, Parkinson’s disease, or major depression. The scale and diversity of this study’s data set are what set it apart from prior research efforts.

“Over the past 20 years, there have been pain researchers observing a pattern for pain, or tinnitus researchers doing the same for tinnitus,” Vanneste said. “But no one combined the different disorders to say, ‘What’s the difference between these diseases in terms of brainwaves, and what do they have in common?’ The strength of our paper is that we have a large enough data sample to show that TCD could be an explanation for several neurological diseases.”

With these results in hand, the next step could be a treatment study based on vagus nerve stimulation—a therapy being pioneered by Vanneste and his colleagues at the Texas Biomedical Device Center at UT Dallas. A different follow-up study will examine a new range of psychiatric diseases to see if they could also be tied to TCD.

For now, Vanneste is glad to see this decades-old idea coming into focus.

“More and more people agree that something like thalamocortical dysrhythmia exists,” he said. “From here, we hope to stimulate specific brain areas involved in these diseases at alpha frequencies to normalize the brainwaves again. We have a rationale that we believe will make this type of therapy work.”

The research was funded by the National Research Foundation of Korea (NRF) and the Seoul National University Bundang Hospital.

Original Paper: Vanneste S, Song J-J, De Ridder D. Thalamocortical dysrhythmia detected by machine learning. Nature Communications. 2018;9(1103).

Source: Nature Communications, University of Texas at Dallas

Image: University of Texas at Dallas

http://www.ear-wax-removal.co.uk

http://www.keynshamhearing.co.uk

New digital hearing aids in Wiltshire

Signia Launches Silk Nx Hearing Aids In Wiltshire

Stephen Neal earwax removal in Wiltshire

If you are closer to Devizes than to Keynsham, we would recommend our sister company

Wiltshire ear clinic 

Signia Silk Nx

Audiology technology company Signia announced its latest innovation, the new Silk Nx hearing aids. Re-engineered to be 20% smaller than its predecessor, these ready-to-wear, completely-in-canal (CIC) devices now include key features of Signia’s Nx hearing aid technology that are designed to deliver the most natural hearing experience.

Signia Silk Nx


With the new Silk Nx solutions, hearing aid wearers do not have to sacrifice size for performance in their hearing aids. Although the already small Silk hearing aids have been designed to be even tinier with this new release, they are also more powerful than ever. The result is what Signia calls a “discreet, instant-fit hearing solution with the highest level of sound quality.”

A practically invisible solution

Many hearing aid wearers, and especially those being fit for the first time, are insecure about others seeing their hearing aids. The Silk Nx were redesigned to be 20% smaller than previous models, according to Signia. As a result, they are designed for an improved fit rate and wearing comfort. They also feature darker faceplate colors that are designed to better blend into the ear canal and further decrease visibility.

Improved sound quality

Built upon Signia’s Nx technology platform, the new Silk is designed to provide wearers with the “most natural” hearing experience, according to the company. And Signia’s binaural beamforming technology is designed to allow clear speech understanding, even in noisy situations. Silk Nx hearing aids are also said to enable natural directionality and wireless streaming between both ears to make sure wearers hear what’s most important.

Signia Silk Nx


Instant-fit design

Silk hearing aids come ready-to-wear, with a secure fit for almost every ear. This is due to their super-soft and flexible silicone Click Sleeves, which are designed for a higher fit rate and are more durable than previous solutions.

More innovative features

The latest release also includes new features like TwinPhone, enabling wearers to put a phone up to one ear and hear the call through both hearing aids. They also represent what is said to be the “world’s first CIC solution” for single-sided deafness. With contralateral routing of signal (CROS) technology, Silk Nx hearing aids include wireless transmitters that transfer sound from the unaidable ear to the better ear, enabling the wearer to hear from both sides. Wearers also benefit from Signia’s apps, including the touchControl™ App and TeleCare™ 3.0, to provide greater control and convenience.
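The CROS routing idea can be reduced to a one-line mixing step, sketched here with an invented `transfer_gain` parameter; real devices apply far more processing than this.

```python
# Minimal CROS sketch (illustrative, not Signia's firmware): the
# transmitter on the unaidable side forwards its microphone signal to the
# better ear, where it is mixed with that ear's own input.
def cros_mix(better_ear_samples, unaidable_ear_samples, transfer_gain=1.0):
    """Return the combined signal presented to the better ear."""
    return [b + transfer_gain * u
            for b, u in zip(better_ear_samples, unaidable_ear_samples)]
```

In effect the wearer's better ear hears both sides of the head at once, which is what lets a CIC device address single-sided deafness.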

Source: Signia

Images: Signia

www.keynshamhearing.co.uk

Radstock earwax removal

Oticon ConnectClip Wins 2018 Red Dot Award for Product Design

Radstock Somerset earwax removal service.


Oticon ConnectClip has earned a 2018 Red Dot Award for product design, the Denmark-based hearing aid manufacturer announced. A panel of international jurors recognized ConnectClip for what was said to be “outstanding design aesthetics” that incorporated a variety of technical, performance, and functionality innovations. The intermediary device is the newest addition to the Oticon connectivity devices designed to improve Oticon Opn™ users’ listening and communication experiences. ConnectClip will be among the award-winning designs exhibited at Red Dot Design Museums around the world.


Commenting on the award win, Gary Rosenblum, president, Oticon, Inc said, “Oticon is honored to receive another prestigious Red Dot Award, this year for our new ConnectClip. This internationally recognized symbol of excellence is a testament not only to ConnectClip’s convenient, lifestyle-enhancing features, but also to the work that goes into the design and continued evolution of our Oticon Opn hearing aid, a 2017 Red Dot Award winner.”

The multi-functional ConnectClip is designed to turn Oticon Opn hearing aids into a high-quality wireless headset for clear, hands-free calls from mobile phones, including iPhone® and Android™ smartphones. Sound from the mobile phone is streamed directly to the hearing aids, while ConnectClip’s directional microphones pick up the wearer’s voice. ConnectClip also serves as a remote/partner microphone, improving the intelligibility of the person wearing it, whether at a distance (up to 65 feet), in very noisy environments, or both. Opn wearers can also use ConnectClip as a remote control for their hearing aids.

Wearable Technology Award Win

Oticon also celebrated a win at the UK’s Wearable Technology and Digital Health Show Awards, where Oticon Opn received the Innovation Award for wearable originality and advancement. The winner was selected by a combination of professional jury votes and a public website vote.

Organizers at the Wearable Technology and Digital Health Show Awards commented on the win: “The judges felt that the Oticon solution presented a revolutionary approach to hearing loss, and that its technology presented a real opportunity for users to interact with the growing number of smart devices in the home. A worthy winner.”

Learn more about the expanded Oticon Opn family, ConnectClip and entire range of wireless connectivity accessories at www.Oticon.com/Connectivity.

* Apple, the Apple logo, iPhone, iPad, iPod touch, and Apple Watch are trademarks of Apple Inc., registered in the U.S. and other countries. App Store is a service mark of Apple Inc. Android, Google Play, and the Google Play logo are trademarks of Google Inc.

Source: Oticon

Images: Oticon, Red Dot

Frome & Somerset earwax removal

City, University of London to Pilot Language and Reading Intervention for Children

If you are closer to Devizes than to Keynsham, we would recommend our sister company

Wiltshire ear clinic 


Researchers from City, University of London have been awarded £97k (approximately USD $136,479) from the Nuffield Foundation to pilot a language and reading intervention with 120 children in their first year of formal education, the school announced on its website.

Involving Dr Ros Herman, Professor Penny Roy, and Dr Fiona Kyle from the School of Health Science’s Division of Language and Communication Science, in collaboration with Professor Charles Hulme from Oxford University, the study—which is reportedly the first reading intervention study to include both deaf and hearing children—will trial the new intervention in primary schools for a year and compare outcomes with other schools that offer the standard literacy teaching.

The research team has shown in previous studies that many severely and profoundly deaf children have significant reading delays, yet these children are typically excluded from reading intervention research.

In this new study, teachers will be trained to deliver the intervention program, comprising systematic phonics teaching alongside a structured vocabulary program, during the school literacy hour. The study will investigate whether all children, or only specific groups of children, benefit from the integrated program and whether a full-scale evaluation is merited.

Dr Herman said, “Our previous research has revealed the scale of reading difficulties among deaf children. Our findings suggest that deaf children will benefit from specialist literacy interventions such as those currently offered to hearing children with dyslexia. In addition, deaf children and many hearing children require ongoing support to develop the language skills that underlie literacy.

“As a result we hope our new study, which will pilot a combined language and reading intervention, will address these issues so that teachers can provide the vital support needed to prevent both hearing and deaf children from unnecessarily falling behind their peers.”

Source: City, University of London

You can also contact Stephen via www.ear-wax-removal.co.uk

GN Hearing Launches Rechargeable Battery Option for ReSound LiNX 3D

Stephen Neal the earwax specialist for Bath, Bristol and the Somerset area.


GN Hearing—the medical device division of the GN Group—has introduced a rechargeable battery option for the ReSound LiNX 3D hearing aids, the company announced. The rechargeable battery solution, available in North America and other major markets from September 1, gives ReSound users more options to choose from. The rechargeable option is also available for Beltone Trust in North America, and from September 1, this will be extended to other major markets.

The rechargeable battery option is offered based on an understanding of user expectations, as well as a commitment to empower users to choose the solution best suited to their needs and preferences. The announcement follows GN Hearing’s release of the innovative 5th generation 2.4 GHz wireless technology ReSound LiNX 3D hearing aids, which offer unmatched sound quality, an enhanced fitting experience, and comprehensive remote fine-tuning, giving users a new hearing care experience, GN Hearing said.

According to the company, ReSound LiNX 3D rechargeable has all of the benefits of ReSound LiNX 3D, now combined with the all-day power of a rechargeable battery. With overnight charging, users will experience the advantage of all-day power, without the need to change batteries.

ReSound LiNX 3D rechargeable accessory.

“GN Hearing is pleased to provide yet another option for hearing aid users, built on our commitment to providing unmatched sound quality and user experience,” said Anders Hedegaard, president & CEO, GN Hearing. “This new rechargeable battery solution allows hearing care professionals to offer an additional option to their clients, and gives hearing aid users even more choices to tailor their hearing experience to their unique preferences,” he added.

Source: GN Hearing 

Image: GN Hearing 

Visual Cues May Help Amplify Sound; earwax can also be a cause of hearing difficulty.

Visual Cues May Help Amplify Sound, University College London Researchers Find


Ear wax removal, somerset, Wiltshire, Bath, Bristol, Norton St Philip, Glastonbury

Ear wax removal and hearing aids, Bath, Bristol, Frome, Glastonbury

Looking at someone’s lips is good for listening in noisy environments because it helps our brains amplify the sounds we’re hearing in time with what we’re seeing, finds a new University College London (UCL)-led study, the school announced on its website.

The researchers say their findings, published in Neuron, could be relevant to people with hearing aids or cochlear implants, as they tend to struggle hearing conversations in noisy places like a pub or restaurant.

The researchers found that visual information is integrated with auditory information at an earlier, more basic level than previously believed, independent of any conscious or attention-driven processes. When information from the eyes and ears is temporally coherent, the auditory cortex—the part of the brain responsible for interpreting what we hear—boosts the relevant sounds that tie in with what we’re looking at.

“While the auditory cortex is focused on processing sounds, roughly a quarter of its neurons respond to light—we helped discover that a decade ago, and we’ve been trying to figure out why that’s the case ever since,” said the study’s lead author, Dr Jennifer Bizley, UCL Ear Institute.

In a 2015 study, she and her team found that people can pick apart two different sounds more easily if the one they’re trying to focus on happens in time with a visual cue. For this latest study, the researchers presented the same auditory and visual stimuli to ferrets while recording their neural activity. When one of the auditory streams changed in amplitude in conjunction with changes in luminance of the visual stimulus, more of the neurons in the auditory cortex reacted to that sound.

“Looking at someone when they’re speaking doesn’t just help us hear because of our ability to recognize lip movements—we’ve shown it’s beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you’re trying to pick someone’s voice out of background noise, that could be really helpful,” said Bizley.

The researchers say their findings could help develop training strategies for people with hearing loss, as they have had early success in helping people tap into their brain’s ability to link up sound and sight. The findings could also help hearing aid and cochlear implant manufacturers develop smarter ways to amplify sound by linking it to the person’s gaze direction.

The paper adds to evidence that people who are having trouble hearing should get their eyes tested as well.

The study was led by Bizley and PhD student Huriye Atilgan, UCL Ear Institute, alongside researchers from UCL, the University of Rochester, and the University of Washington, and was funded by Wellcome; the Royal Society; the Biotechnology and Biological Sciences Research Council (BBSRC); Action on Hearing Loss; the National Institutes of Health (NIH); and the Hearing Health Foundation.

Original Paper: Atilgan H, Town SM, Wood KC, et al. Integration of visual information in auditory cortex promotes auditory scene analysis through multisensory binding. Neuron. 2018;97(3)[February]:640–655.e4. doi.org/10.1016/j.neuron.2017.12.03

Source: University College London, Neuron

Oticon Opn™ A new hearing aid.


Stephen Neal, audiologist at the Keynsham Hearing Centre, knows all about hearing aids and earwax removal using microsuction and ear irrigation techniques, and shares the latest hearing aid from Oticon. Digital hearing instruments really are the natural option for living in a digital world. Contact Stephen Neal to book an appointment at his Keynsham hearing centre; Stephen also offers out-of-hours appointments.

Hearing

The challenge of hearing clearly amidst background noise is a complaint hearing care professionals commonly encounter. Houston-based audiologist Jana Austin discusses how the Oticon Opn helped Bryan Caswell, a chef, manage the “tornado” of background noise coming at him from all directions in a busy restaurant environment. With its OpenSound Navigator and Spatial Sound LX working in tandem to identify sounds and manage noise, Caswell can hear a conversation from across the kitchen that he likens to a dart of sound that he’s catching. For Austin, the Opn reaffirms her ability to improve a patient’s quality of life.

Stephen Neal at Keynsham hearing centre, near Bristol and Bath, can help with the supply and fitting of this hearing aid, or any other hearing aid on the market today. With digital hearing instruments now so advanced, you will be surprised at how your life can be transformed within a few days of fitting.

Stephen Neal is a registered HCPC dispenser and works with all the large hearing aid/instrument manufacturers. With his expert advice and fitting, you will be surprised at how digital technology in the hearing world has changed in recent years. Ask Stephen for a demo of how connecting with your smartphone, iPad, and TV can transform your world.