Misleading Iliad ad: the IAP formalizes requests for changes
Resident Evil 2 and 3 plus Resident Evil 7 coming to PS5 and Xbox Series X|S in 2022
Gender parity in top management roles at Italian companies

Interview with Carola Salvato, President of Global Women in PR Italy. In a few days it will be March 8, International Women's Day, when every year we remember the commitment…
The article "Gender parity in top management roles at Italian companies", written by Paolo Brambilla, originally appeared on Assodigitale.
Nonna Cleme, the new Food business model in the heart of Turin

Nonna Cleme. A new idea of pizza in Turin. As of today it is official: a new idea of pizza has been born in the city, a new principle, a new gathering place.…
The article "Nonna Cleme, the new Food business model in the heart of Turin", written by Paolo Brambilla, originally appeared on Assodigitale.
OPPO Find X5 Pro: easy to fall in love with! The review
Google: back to the office starting in April
Ukraine will deliver an airdrop "reward" to cryptocurrency donors
Elon Musk's young "spy" has a new bot that can track Russian planes
What World Hearing Day means for this Googler
Dimitri Kanevsky, a research scientist at Google with an extensive background in mathematics, knows the impact technology can have when built with accessibility in mind. Having lost his hearing in early childhood, he imagines a world where technology makes it easier for people who are deaf or hard of hearing to be part of everyday, in-person conversations with hearing people, whether that's ordering coffee at a cafe, conversing with coworkers or checking out at the grocery store.
Dimitri has been turning that idea into a reality. He co-created Live Transcribe, our speech-to-text technology, which launched in 2019 and is now used daily by over a million people to communicate — including Dimitri. He works closely with the team to develop new and helpful features — like an offline mode that will be launching in the coming weeks to give people access to real-time captions even when Wi-Fi and data are unavailable.
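Live Transcribe's actual implementation is not public; the sketch below only illustrates the routing behavior the paragraph describes, where captions fall back to an on-device model when Wi-Fi and data are unavailable. All function names (`transcribe_online`, `transcribe_offline`, `live_transcribe`) are hypothetical stand-ins.

```python
# Illustrative sketch only: the real Live Transcribe pipeline is not public.
# The two recognizer functions below are hypothetical stand-ins for the
# server-backed and on-device speech-to-text paths the article mentions.

def transcribe_online(audio: str) -> str:
    # Stand-in for a server-backed recognizer (needs network).
    return f"online:{audio}"

def transcribe_offline(audio: str) -> str:
    # Stand-in for an on-device recognizer (works with no Wi-Fi or data).
    return f"offline:{audio}"

def live_transcribe(audio: str, has_connectivity: bool) -> str:
    """Route audio to the online recognizer when the network is available,
    otherwise fall back to the on-device model so captions keep working."""
    if has_connectivity:
        try:
            return transcribe_online(audio)
        except ConnectionError:
            pass  # network dropped mid-session: fall through to offline
    return transcribe_offline(audio)

print(live_transcribe("hello", has_connectivity=False))  # offline:hello
```

The point of the design is graceful degradation: the caption stream never stops, it just switches recognizers.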
For World Hearing Day, we talked with Dimitri about his work, why building for everyone matters and the future of accessible technology.
Tell us more about your background and job at Google.
When I moved to the U.S. in 1984, there were no transcription services. I wanted to change that, so I focused my work on optimizing speech and language recognition to help people who are deaf or hard of hearing.
I eventually moved from academia to Google’s speech recognition team in 2014. The work my team and I accomplished allowed us to create practical applications — like Live Transcribe and Live Caption.
How has your personal experience shaped your career?
I completely lost my hearing when I was one. I learned to lipread well so I could communicate with other students and teachers. My family was also very helpful to me. When I switched to a school where my father taught, he made sure I was in a class with children I knew so it was a smoother transition.
But in eighth grade, I moved to a math school with new teachers and students and was unable to lipread what they taught in class or communicate with my new classmates. I sat, day after day, not understanding the material they were teaching and had to teach myself from textbooks. If I had a tool like Live Transcribe when I was growing up, my experience would have been very different.
In what ways has assistive technology — like Live Transcribe — changed your experience today?
Technology provides tremendous opportunities to help people with disabilities — I know this firsthand.
I use Live Transcribe every day to communicate with others. I use it to play games and share stories with my twin granddaughters — which is life-changing. And just last week, I gave a lecture at a mathematical seminar at Johns Hopkins University. During it, I could interact with the audience and answer questions — without Live Transcribe that would have been very difficult for me to do.
I used to rely heavily on lipreading for day-to-day tasks, but when people wear masks I can’t do that — I don’t even know when someone who’s wearing a mask is talking to me. Because of this, Live Transcribe is even more important to me — especially when at stores, riding public transit or visiting a doctor.
What are you excited about when you think about speech recognition technology ten years from now?
My dream is to use speech recognition technology to help people communicate. As technology advances, it will unlock new possibilities — such as transcribing speech even as people switch languages, understanding people with all accents and speech motor skills, indicating more sound events with visual symbols and automatically integrating sign recognition or additional haptic feedback technologies.
Further in the future, I hope to see an experience where people are no longer dependent on a mobile phone to see transcriptions. Perhaps transcriptions will appear in convenient wearable eyewear, or on a wall when someone looks at it. Some even predict that there will be no mobile phones at all, since all the devices around us, like our walls, will act as mobile devices when we need them to.
What do you want others to learn from World Hearing Day?
According to the WHO, one in ten people will experience hearing loss by 2050. Still, many people with hearing loss don't know about the novel speech recognition technologies that could help them communicate, and many hearing people aren't aware of these tools either.
World Hearing Day is an opportunity to make everybody aware of the needs of people with hearing loss and the technology that everyone can use to have a tremendous impact on their lives.
Helping Ukraine
Google.org's support
Updating Google Search and Maps in Ukraine
Expanding security protections
Promoting quality information
During this crisis we are taking extraordinary measures to stop the spread of false news and to disrupt online disinformation campaigns.
Helping our colleagues in Ukraine
Managing our services in Russia
Winter Big Bundle, up to 80% off: Freelancer plan at $119 instead of $1,562
Machine learning can help read the language of life
DNA is the language of life: our DNA forms a living record of things that went well for our ancestors, and things that didn’t. DNA tells our body (and every other organism) which proteins to produce; these proteins are tiny machines that carry out enormous tasks, from fighting off infection to helping you ace an upcoming exam in school.
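The claim that "DNA tells our body which proteins to produce" can be made concrete with the standard genetic code: DNA is read three bases at a time, and each triplet (codon) maps to one amino acid of the protein chain. A minimal sketch, using just a handful of real codons for brevity:

```python
# A tiny illustration of DNA -> protein translation via the standard
# genetic code. Only four of the 64 codons are included here.

CODON_TABLE = {
    "ATG": "M",  # methionine (the usual start codon)
    "TGG": "W",  # tryptophan
    "AAA": "K",  # lysine
    "TAA": "*",  # stop signal: end of the protein
}

def translate(dna: str) -> str:
    """Read the DNA string three bases at a time and emit amino acids
    until a stop codon is reached."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("ATGTGGAAATAA"))  # MWK
```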
But for about a third of all proteins that all organisms produce, we just don’t know what they do. It’s kind of like we’re in a factory where everything’s buzzing, and we’re surrounded by all these impressive tools, but we have only a vague idea of what’s going on. Understanding how these tools operate, and how we can use them, is where we think machine learning can make a big difference.
An example of a previously-solved protein structure (E. coli TrpCF) and the area where our AI makes predictions of its function. This protein produces tryptophan, which is a chemical that’s required in your diet to keep your body and brain running.
Recently, DeepMind showed that AlphaFold can predict the shape of protein machinery with unprecedented accuracy. The shape of a protein provides very strong clues as to how the protein machinery can be used, but doesn’t completely solve this question. So we asked ourselves: can we predict what function a protein performs?
In our Nature Biotechnology article, we describe how neural networks can reliably reveal the function of this “dark matter” of the protein universe, outperforming state-of-the-art methods. We worked closely with internationally recognized experts at the European Bioinformatics Institute to annotate 6.8 million more protein regions in the Pfam v34.0 database release, a global repository for protein families and their function. These annotations exceed the expansion of the database over the last decade, and will enable the 2.5 million life-science researchers around the world to discover new antibodies, enzymes, foods, and therapeutics.

The Pfam database is a large collection of protein families and their sequences. Our ML models helped annotate 6.8 million more protein regions in the database.
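The paper's actual method is a deep neural network; as a much simpler stand-in, the task itself — assigning an amino-acid sequence to a protein family — can be sketched with a toy k-mer overlap classifier. The family names and sequences below are made up for illustration only.

```python
# Hedged sketch: the real work uses deep learning. This toy classifier
# only illustrates the task of mapping a protein sequence to a family
# label, using shared k-mer counts as a crude similarity measure.
from collections import Counter

def kmer_counts(seq: str, k: int = 3) -> Counter:
    # Bag-of-k-mers feature profile for a protein sequence.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a: Counter, b: Counter) -> int:
    # Number of k-mers two profiles share (multiset intersection size).
    return sum(min(a[m], b[m]) for m in a.keys() & b.keys())

def predict_family(seq: str, families: dict) -> str:
    # Assign the family whose example sequences share the most k-mers.
    profile = kmer_counts(seq)
    return max(
        families,
        key=lambda fam: max(similarity(profile, kmer_counts(s))
                            for s in families[fam]),
    )

# Invented example sequences, purely for demonstration.
families = {
    "kinase-like": ["GMGKSTL", "GMGKSTV"],
    "globin-like": ["HLKVAHA", "HLKVSHA"],
}
print(predict_family("GMGKSTA", families))  # kinase-like
```

A real model replaces the hand-built similarity with learned features, which is what lets it annotate sequences too remote for simple sequence matching — the "dark matter" the article refers to.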
We also understand there’s a reproducibility crisis in science, and we want to be part of the solution — not the problem. To make our research more accessible and useful, we’re excited to launch an interactive scientific article where you can play with our ML models — getting results in real time, all in your web browser, with no setup required.
Google has always set out to help organize the world’s information, and to make it useful to everyone. Equity in access to the appropriate technology and useful instruction for all scientists is an important part of this mission. This is why we’re committed to making these models useful and accessible. Because, who knows, one of these proteins could unlock the solution to antibiotic resistance, and it’s sitting right under our noses.