Trained on over 1 million hours of audio data, Nova-2 uses a Transformer-based automatic speech recognition (ASR) model architecture.
What does that deliver? In a word: accuracy. Compared to Nova-1, Nova-2 offers an 18% improvement in transcription accuracy.
OK, but what about the competition? On accuracy, Nova-2 outperforms its closest commercial competitor by 16.8% and beats OpenAI’s open-source ASR model, Whisper-large, on word error rate (WER) by a whopping 36.4%.
Those are just the overall numbers, though. Nova-2’s inference accuracy is especially strong, at least vis-à-vis the competition, on phone-call and media-parsing tasks.
Surely, you must be thinking, access to such a model will cost bazillions of dollars. Nope: Nova-2 pricing starts at just $0.0043 per minute.
To review: Deepgram’s Nova-2 model demonstrates state-of-the-art performance on inference accuracy, inference speed, and cost effectiveness across domain-specific speech-to-text (STT) tasks. But here’s the TL;DR version of the rest:
🎯 Nova-2 delivers 18% higher accuracy than its predecessor.
🗣️ Nova-2 beats the competition on real-time transcription accuracy by 30%.
🏎️ Median inference time for 1 hour of audio is just 29.8 seconds (roughly 120× faster than real time).
💸 API pricing for Nova-2 kicks off at just $0.0043/minute.
Long story short: Nova-2 is pretty nifty, and there’s a lot more to learn in the blog post linked above.
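At that quoted starting rate, transcription cost is easy to estimate. A minimal sketch — the helper name and rate constant are ours, taken only from the pricing figure above, and actual billing may differ:

```python
# Back-of-the-envelope cost estimate at Nova-2's quoted starting rate.
# The rate below comes from the pricing figure above; actual billing may differ.
NOVA2_RATE_PER_MIN = 0.0043  # USD per minute of audio

def estimate_cost(audio_minutes: float) -> float:
    """Return the estimated USD cost of transcribing `audio_minutes` of audio."""
    return round(audio_minutes * NOVA2_RATE_PER_MIN, 4)

# One hour of audio costs roughly a quarter:
print(estimate_cost(60))           # 0.258
# A 10,000-hour call archive:
print(estimate_cost(10_000 * 60))  # 2580.0
```

In other words, at the entry price an hour of audio runs about 26 cents.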
ASR has come a long way: from on-paper human transcription, to rule-based mechanical systems, to where we are today, with end-to-end deep learning models automating the most tedious of workflows.
But a tool is only as good as its specific use case. And at least when it comes to Language AI, rock-solid ASR is the foundational platform for an expanding universe of AI apps. Here, we highlight a series of use cases and showcase a company (or two) working to realize the potential of language AI in the enterprise and beyond.
🏗️ Building Infra for AI Builders
Any fan of flowcharts knows that one idea oftentimes logically connects to another.
But what's the middle point between intuition and code? For skilled programmers seeking a prototyping tool, up-and-coming folks who prefer connecting the dots to writing code, and everyone in between, there's AirOps.
AirOps is a playground for AI newcomers on one side and a full-featured developer platform on the other: developers can visually define workflows and preview results as they're delivered. The platform lets developers specify, implement, and launch AI apps from basic building blocks like search, inference, and projections through an interface that's as easy as dragging, dropping, and filling in the blanks.
Backed by the Altman family office Apollo Projects, alongside Founder Collective and lead investor Wing Venture Capital, AirOps has raised at least $7 million in total funding so far.
😲 Computing Human Emotions
Ask someone how they're doing and the way they answer will tell you a lot. There's a world of difference between a bright and chipper "Oh, great!", a sardonic "Oh, greeeaaaat" accompanied by an eye roll, and a flat-sounding "Ohhh, great" delivered while staring blankly into the middle distance. On paper, they’re the same words. In practice, the meaning lies in how the message is delivered.
For most humans, picking up on these cues is instinctual. For computers, suffice it to say that emotional valence just doesn't compute. Hume AI is aiming to change that. The NYC-based startup is building a toolkit for developers to measure and understand human emotion. A single API gives developers access to several different AI models, from a voice model that can classify over 50 different emotions from speech data to a multimodal model that incorporates voice and facial-expression data to predict anger or excitement. The company has also been fairly transparent with its research, compiling and publishing several open datasets, including a 400,000+ sample dataset of facial expressions and a 300,000+ sample set of vocal utterances expressing everything from distress and disgust to desire.
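To make that concrete, here's what consuming a multi-emotion classifier's output might look like on the developer's side. The response shape, labels, and scores below are entirely hypothetical — they illustrate the idea, not Hume AI's actual API schema:

```python
# Hypothetical emotion-classifier output: label -> confidence score.
# This shape and these labels are illustrative, not Hume AI's actual schema.
scores = {
    "joy": 0.71,
    "sarcasm": 0.12,
    "boredom": 0.09,
    "distress": 0.05,
}

def top_emotions(scores: dict[str, float], k: int = 3) -> list[tuple[str, float]]:
    """Return the k highest-scoring emotion labels, strongest first."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(top_emotions(scores, k=2))  # [('joy', 0.71), ('sarcasm', 0.12)]
```

With 50+ labels in play, an app would typically surface only the top few scores rather than the full distribution.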
Hume AI was founded in March 2021 and has raised $17.7 million in funding to date from Union Square Ventures, Aegis Ventures, and Comcast Ventures, among others.
💹 Transforming Earnings Calls into Investment Intelligence
"Building in public" has become a bit of a meme in startup circles, but for public companies, it's not just an ethos; it's the law. On a quarterly basis, company leadership and their cadre of beancounters spill the beans about, among other things, how much money the company made and what their plans are. And it turns out there's a lot more to be learned from an earnings call than just numbers on a spreadsheet.
It's that vein of voice data that Quartr is mining. Straight outta Stockholm, the startup transcribes earnings calls en masse and compiles those transcripts—along with slide decks and earnings reports—into a searchable database. Ingesting reports from "roughly 8,000" public companies yields a lot of language data with which investment researchers can answer questions: Are retail executives using the word "inflation" more frequently now vs. last year? How frequently do semiconductor companies mention "geopolitical risk" and "Taiwan"? Are snack makers afraid of "GLP-1 agonists"? When did everyone start talking about "Large Language Models" anyway?
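Questions like those reduce to keyword counting over a transcript corpus. A minimal sketch — the toy transcripts and helper function are ours for illustration, not Quartr's data or API:

```python
from collections import Counter
import re

# Toy transcript corpus keyed by year. Real data would come from a service
# like Quartr's; this structure is illustrative, not their API.
transcripts = {
    2022: ["Inflation pressured margins.", "We see inflation moderating."],
    2023: ["Inflation remains a headwind.", "Pricing offset inflation; inflation eased."],
}

def mention_counts(transcripts: dict[int, list[str]], term: str) -> Counter:
    """Count case-insensitive whole-word mentions of `term` per year."""
    pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
    return Counter({
        year: sum(len(pattern.findall(text)) for text in texts)
        for year, texts in transcripts.items()
    })

print(mention_counts(transcripts, "inflation"))  # Counter({2023: 3, 2022: 2})
```

Scaled to thousands of companies and years of calls, the same counting logic surfaces the trend lines researchers care about.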
Originally launched as a mobile app, Quartr is now also available on desktop, and the data service provider also launched an API to enable other financial application developers to integrate Quartr's data and live earnings call feeds into their apps. The startup has raised $7.1 million in seed capital since its inception in 2020; its last reported funding round was in July 2022.
☎️ Removing Bias and Making Call Centers Better for Employees
Founded by Stanford engineers in honor of a friend and fellow Stanford engineer, Sanas builds accent translation technology that aims to remove the bias barrier for non-native speakers on phone calls and other voice channels. Though Sanas could serve several applications, its current target market is call centers and customer service departments. The company says that by removing the accent barrier, Sanas gives employees the psychological security of knowing they will not be berated, harassed, or otherwise mistreated because of their accent.
Sanas currently supports accent neutralization to and from American English, Australian, British, Filipino, and Spanish accents. Case studies showed that using the product improved customer understanding by 31% and customer satisfaction by 21% while protecting the mental health of the employees who choose to use it.
In June 2023, Sanas announced a $32 million Series A, the largest Series A round of any speech technology company to date, with participation from top venture capital firms including Insight Partners, General Catalyst, GV, and more.
🚑 In Emergency Response, Efficiency is Everything
Founded in 2020, Prepared offers an end-to-end AI solution for 911 callers and emergency response teams. Emergency dispatch centers armed with Prepared Live can send callers a link that lets them live-stream video, communicate over text, and share a precise GPS location. The enhanced, paid-subscription version of Prepared Live adds language translation, external integrations, and more.
For first responders, the company developed Prepared Onscene, a product that integrates with Prepared Live, transmitting the data provided by the caller to the responding units before they arrive. This allows firefighters, police, and emergency medical services to see fires, situations, or victims before they are physically present, buying time for mental preparation for the work ahead. Unlike Prepared Live, Onscene has no freemium tier and is available only by subscription.
Lastly, Prepared AI is the company’s transcription, translation, and workflow-automation product, intended to help 911 dispatchers do their jobs more efficiently in the wake of the labor shortage. To date, Prepared has raised over $11 million across two seed rounds, with First Round Capital leading.
📝 Streamlining Studying for Success
Whiteboard is an AI-powered educational study tool geared towards college students. The platform has various features intended to help students study more efficiently by removing lecture fluff to focus on the material that matters. Whiteboard lets students import their recorded lectures; it then uses an LLM to break each lecture into clear, concise annotated sections alongside the recording.
In addition to its “Video Chat” feature, Whiteboard offers enhanced notes on reading materials, flashcard generation, and AI-powered tutoring. The AI tutor gathers the user's applicable materials and points them to the content that answers their questions. The platform has several subscription options, including a free plan with some content limitations. Individual students can subscribe to Whiteboard; alternatively, universities can provide it to their students.