8Flow.ai raises $6.6M to automate customer support workflows

Working in customer support often means navigating disjointed tools to find data and solve issues. However, many of these actions are routine and repeatable, making them ideal candidates for automation. 8Flow.ai, which is launching out of stealth today and announcing a $6.6 million seed funding round, wants to do exactly this.

The company is rolling out an enterprise-grade, self-learning workflow automation engine that integrates with tools like Zendesk, ServiceNow and Salesforce Service Cloud to assist agents in their daily tasks. But that’s only step one. The company then plans to use all of this data to train machine-learning models to generate AI-led workflows that are tailored to each user’s needs.

8Flow co-founders Josh Russ, Yev Goldin and Boaz Hecht. Image Credits: 8flow

“Ultimately, where we are headed is that we’re building a tool that learns what an agent is doing today,” 8Flow co-founder and CEO Boaz Hecht told me. “But in the end, it really doesn’t matter if you’re a support agent, or you’re a finance person, or you’re an HR person, or whatever — the point is, we’re building an engine that learns in order to build a model that is then rich enough to be used.”

Right now, though, the team is still focused on phase one and on optimizing the user interface and experience. Hecht noted that this already helps agents out quite a bit and creates enough value for enterprises to pay for the product today. It also allows 8Flow to capture data and collect feedback.

“The idea is that we’re capturing data — we’re collecting the feedback — meaning we’re iterating for the agent on what they’re doing,” he explained. “Then, as they do that, we observe what they do and then we create — I almost think of it like building Lego bricks from the bottom up — we learn what bricks need to be built. Once we build the bricks, we learn how you put them together. And once you put them together, we see what you’ve built and we recreate those into pre-built workflows.”

Image Credits: 8flow

Currently, this takes the form of a Chrome extension that can automatically copy and paste relevant data from one application to another. The tool automatically learns common steps for each agent and then presents them as actions they can trigger with a single click.

As Hecht noted, processing an order return, for example, may take a support agent dozens of clicks, but it should really only take a handful, because most of the steps amount to copying and pasting data from one application to another. Only a few steps require the agent to make a decision (whether to issue the refund, for example) — and ideally, that’s what a tool like 8Flow lets agents focus on.
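To make that mechanism a bit more concrete, here is a minimal, hypothetical sketch of the underlying idea: observe an agent’s repeated steps across tools, mine the sequences that recur, and surface them as one-click actions. The function names, event format and thresholds below are illustrative assumptions, not 8Flow’s actual implementation.

```python
# Hypothetical sketch of the "Lego bricks" idea: mine an agent's repeated
# action sequences from an observed event log and surface them as one-click
# suggestions. Event format and thresholds are illustrative assumptions.
from collections import Counter

def mine_repeated_sequences(event_log, seq_len=3, min_count=5):
    """Return action sequences the agent repeats often enough to automate."""
    # event_log is a list of (app, action, field) tuples captured per ticket,
    # e.g. ("Zendesk", "copy", "order_id"), ("ERP", "paste", "order_id"), ...
    ngrams = Counter(
        tuple(event_log[i:i + seq_len])
        for i in range(len(event_log) - seq_len + 1)
    )
    return [seq for seq, count in ngrams.most_common() if count >= min_count]

def suggest_one_click_actions(event_log):
    """Turn frequent sequences into human-readable, one-click suggestions."""
    suggestions = []
    for seq in mine_repeated_sequences(event_log):
        steps = " -> ".join(f"{app}:{action}({field})" for app, action, field in seq)
        suggestions.append(f"Run saved workflow: {steps}")
    return suggestions
```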

“8Flow has allowed my team to significantly increase efficiency and accuracy,” said Heather English, a senior customer support manager at FloorFound. “We save hours using 8Flow to eliminate the need to switch between browser tabs to copy and paste data. We’re excited about their roadmap ahead, as each release results in freeing up more valuable time for our team members.”

There’s an additional benefit here for businesses: they get data about which tools their agents actually use (and which they can cancel because they aren’t being used) and how they use them.

Image Credits: 8flow

Hecht is no stranger to workflow automation. He was the co-founder and CEO of SkyGiraffe, an early enterprise mobility platform, which he sold to ServiceNow in 2017. He stayed at ServiceNow until March 2022, ultimately becoming VP of Platform and leading the teams focused on the company’s core platform, mobile, AI chatbots and most employee-facing products. By late 2021, though, he had decided he wanted to build another startup, and he took former SkyGiraffe and ServiceNow colleagues Josh Russ and Yev Goldin with him to co-found the company.

While 8Flow’s focus right now is on support agents, largely because that’s the market the founding team knows best, there’s no reason it couldn’t add just as much value to businesses in other verticals as well.

8Flow’s seed round was led by Caffeinated Capital, with BoxGroup, Liquid2, HNVR and Trilogy also participating. A number of prominent angel investors, including former GitHub CEO Nat Friedman, Airtable co-founder and CEO Howie Liu, and Snowflake CFO Michael Scarpelli, also took part.

8Flow.ai raises $6.6M to automate customer support workflows by Frederic Lardinois originally published on TechCrunch

ChatGPT’s new app comes out of the gate hot, tops half a million installs in first 6 days

Despite being U.S.- and iOS-only ahead of today’s expansion to 11 more global markets, OpenAI’s ChatGPT app has been off to a stellar start. The app has already surpassed half a million downloads in its first six days since launch, according to a new analysis by app intelligence provider data.ai. That ranks it as one of the highest-performing new app releases across both this year and the last, topped only by the February 2022 arrival of the Trump-backed Twitter clone, Truth Social.

As consumer demand for AI chatbots heated up, other third-party apps calling themselves “ChatGPT” or “AI chatbot” have filled the App Store. While many of these were essentially fleeceware, trying to trick consumers into paying for expensive subscriptions to access their AI, a combined group of top apps still managed to pull in millions in consumer spending. This competitive landscape among AI chatbot apps could have created a tougher market for an official ChatGPT app to gain traction. But as it turns out, that was not the case.

OpenAI’s ChatGPT app outperformed most of its rivals, including other popular AI and chatbot apps as well as Microsoft’s apps, Bing and Edge, which offered some of the first significant third-party integrations of OpenAI’s GPT-4 technology.

Though Bing and Microsoft Edge certainly benefitted from the interest in ChatGPT at their debut, seeing a respective 340K and 335K downloads across iOS and Android in their best five-day periods in February, OpenAI’s ChatGPT app easily topped them, generating 480,000 installs in the first five days of its U.S. launch, when the app was iOS-only.

Compared with Bing and Edge’s iOS downloads alone, ChatGPT was even further ahead, with its 480K installs versus Bing’s 250K and Edge’s 195K.

Image Credits: data.ai

However, Bing and Edge were still ahead of ChatGPT when looking at all U.S. downloads in May across both app stores — but not when comparing only iOS installs for the month. That indicates ChatGPT may soon pull ahead of these search-focused alternatives.

Image Credits, above and below: data.ai

Data.ai’s analysis also found the app outperformed other top AI chatbot apps in the U.S., many of which were generically named in order to capitalize on consumer searches for keywords like “AI” and “chatbot” on the App Store. Here, OpenAI’s ChatGPT found itself in the top five by downloads, when ranked against other apps’ best five-day periods in 2023 across the App Store and Google Play.

The only app to beat it was “Chat with Ask AI,” which saw 590,000 installs from April 4-8, 2023, compared with ChatGPT’s 480,000 installs from May 18-22, the data indicates.

Image Credits: data.ai

Though it’s only been available for a week, ChatGPT is also already ranking in the top five among AI chatbot apps by downloads in the U.S. in May 2023. At the time of data.ai’s number crunching, it compared ChatGPT and other top chatbot apps for the month of May through the 23rd — so, technically, less than a week since ChatGPT’s launch.

By then, the app had seen 550,000 downloads, tying with Genie – AI Chatbot, the next nearest ranked AI chatbot app by May downloads on the U.S. App Store. A few others were further ahead, however, including ChatOn — AI Chat Bot Assistant (610K installs), AI Chatbot – Nova (680K installs), and Chat with Ask AI (1.4M installs). Still, given how quickly ChatGPT was able to top half a million installs, it may soon beat these rivals.

Image Credits: data.ai

In addition, ChatGPT had one of the best new app debuts this year — and in 2022, data.ai found.

In ChatGPT’s first five days of U.S. iOS downloads post-launch, it generated 480,000 installs, which ranked it as the No. 2 biggest app launch, behind Truth Social, which saw 630,000 downloads. The next largest debuts (i.e., the first five days post-launch) included the March 2023 arrival of Widgetable: Lock Screen Widget (360,000 installs) and the 2022 launches of MyNBA2K23 (310,000 installs) and sendit – Q&A on Instagram (260,000 installs).

This also put ChatGPT in the >99.99th percentile for new app launches in the U.S. since 2022 on iOS.

Data.ai notes that only the top 1% of apps generated more than 10,600 U.S. downloads in their first five days, and the top 0.1% had more than 45,000. Its analysis included data for roughly 39,000 apps that launched in the U.S. on iOS since the start of 2022 and then ranked in the top charts at some point over this period. (The data doesn’t include Apple’s first-party apps, like Apple Music Classical).
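For a sense of how that percentile figure follows from the numbers above, here is a quick, illustrative calculation using only the reported figures; it is simple arithmetic, not data.ai’s methodology.

```python
# Illustrative back-of-the-envelope check of the ">99.99th percentile" claim,
# using only the figures cited above (roughly 39,000 tracked launches, with
# ChatGPT's debut ranked No. 2 behind Truth Social).
tracked_launches = 39_000   # iOS apps launched in the U.S. since early 2022 that later charted
chatgpt_debut_rank = 2      # only Truth Social's first five days were bigger

percentile = (1 - chatgpt_debut_rank / tracked_launches) * 100
print(f"ChatGPT's debut sits at roughly the {percentile:.3f}th percentile")
# -> ~99.995, i.e. above the 99.99th percentile
```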

Image Credits: data.ai

Of course, installs are only one way of measuring consumer demand and are not as reliable as analyzing how many people then signed up and became active app users.

However, because ChatGPT is still so new, data.ai won’t yet have accurate estimates on metrics like daily or monthly active users, it says — that may take another few weeks to generate.

ChatGPT’s new app comes out of the gate hot, tops half a million installs in first 6 days by Sarah Perez originally published on TechCrunch

TikTok is testing an in-app AI chatbot called ‘Tako’

AI chatbots, like ChatGPT, are all the rage, so it’s no surprise to learn that TikTok is now testing its own AI chatbot, as well. Called “Tako,” the bot is in limited testing in select markets, where it will appear on the right-hand side of the TikTok interface, above the user’s profile and other buttons for likes, comments and bookmarks. When tapped, users can ask Tako various questions about the video using natural language queries or discover new content by asking for recommendations.

For instance, when watching a video of King Charles’ coronation, Tako might suggest that users ask “What is the significance of King Charles III’s coronation?”

Or, if users were looking for ideas of something to watch, they could ask Tako to suggest some videos on a particular topic — like funny pet videos. The bot would respond with a list of results that include the video’s name, author and subject, as well as links to suggested videos. From here, you could click on a video’s thumbnail to be directed to the content.

Image Credits: TikTok screenshot by Watchful.ai

The bot was first spotted in public testing by app intelligence firm Watchful.ai, and TikTok confirmed the tests are now live.

“Being at the forefront of innovation is core to building the TikTok experience, and we’re always exploring new technologies that add value to our community,” a TikTok spokesman told TechCrunch. “In select markets, we’re testing new ways to power search and discovery on TikTok, and we look forward to learning from our community as we continue to create a safe place that entertains, inspires creativity and drives culture.”

Though Watchful.ai says it found the AI chatbot in tests on iOS devices in the U.S., TikTok says the bot is not currently available in the U.S.; it is being tested in other global markets, including an early, limited test in the Philippines.

We also understand the bot will not appear on minors’ accounts.

Behind the scenes, Tako is powered by an undisclosed third-party AI provider whose technology TikTok has customized for its needs. The bot does not use any in-house AI technology from TikTok or parent company ByteDance.

Upon first launch, TikTok advises users in a pop-up message that Tako is still considered “experimental” and its feedback “may not be true or accurate” — a disclaimer that applies to all modern AI chatbots, including OpenAI’s ChatGPT and Google’s Bard, among others. TikTok also stresses that the chatbot should not be relied on for medical, legal or financial advice. (We understand the wording in the image below may reflect an earlier version of the bot rather than the current tests.)

Image Credits: TikTok screenshot by Watchful.ai

The disclosure also notes that all Tako conversations will be reviewed for safety purposes and, vaguely, to “enhance your experience.” This is one of the complications that come with using modern AI chatbots, unfortunately. Because the technologies are so new, companies are opting to log customer interactions and review them to help their bots improve. But from a privacy standpoint, that means the AI conversations are not being deleted after chats end, which poses potential risks.

Some companies have worked around this consumer privacy concern by allowing users to delete their chats manually, as Snap has done with its My AI chatbot companion in the Snapchat app. TikTok is taking a similar approach with Tako, as it also allows users to delete their chats.

It’s unclear if the AI chatbot is logging data associated with the user’s name or other personal information, though. The long-term data retention policies or privacy aspects of the chatbot also couldn’t be determined at this time.

Image Credits: TikTok screenshot by Watchful.ai

The security risks of AI chatbots have led some companies to ban such bots at work, including Apple, which has gone so far as to restrict employees from using tools like OpenAI’s ChatGPT or Microsoft-owned GitHub’s Copilot over concerns about confidential data being leaked. Others who have recently enacted similar bans include banks like Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo and JPMorgan, as well as Walmart, Samsung and telecom giant Verizon.

Why consumers would even want an AI chatbot in TikTok is another matter.

While most companies are experimenting with AI in some way, shape or form, TikTok believes the chatbot could do more than just answer questions about a video — it could also become a different way for users to surface content in the app, beyond typing into a search box.

This could become a threat to Google if TikTok’s tests prove successful and the chatbot rolls out publicly, given that Google has already noted how Gen Z users are turning to TikTok and Instagram as the first place they go to search on certain subjects. Google will soon begin rolling out a conversational experience in search, but if TikTok had its own in-app AI chatbot, that could encourage younger users to bypass Google altogether.

Update, 5/25/23, 9 AM ET: After publication, TikTok shared additional information about Tako on its Twitter account. We’ve updated with additional details, where relevant.

TikTok is testing an in-app AI chatbot called ‘Tako’ by Sarah Perez originally published on TechCrunch

Audio journalism app Curio can now create personalized episodes using AI

Curio, a startup building a platform that turns expert journalism into professionally narrated content, is embracing AI technology to create customized audio episodes based on your prompts. The company already has a large catalog of high-quality journalism licensed from partners like The Wall Street Journal, The Guardian, The Atlantic, The Washington Post, Bloomberg, New York Magazine, and others, which it leveraged to train its AI model, powered by OpenAI technologies. Curio users can now ask its new AI helper, “Rio,” a question they want to learn more about, then have it return a bespoke audio episode that includes only fact-checked content — not AI “hallucinations.”

The company is also today announcing an additional strategic investment from the head of TED, Chris Anderson, a prior investor in Curio’s Series A round. Ahead of this, Curio had raised over $15 million from investors including Earlybird, Draper Esprit, Cherry Ventures, Horizons Ventures, 500 Startups, and others.

Anderson’s new contribution amount is not being disclosed, but Curio says he’s a “significant investor.”

Founded in 2016 by ex-BBC strategist Govind Balakrishnan and London lawyer Srikant Chakravarti, Curio’s original concept was to offer a subscription-based service that provides access to a curated library of journalism translated into audio. To do so, the company partnered with dozens of media organizations to license their content, which is then narrated by voice actors and added to the Curio app. The experience is an improvement over the news audio offerings provided by services like Pocket, where users save articles to listen to later, as Curio’s content is read by real people, not robotic-sounding AI voices.

With the addition of its AI feature, Curio is now able to curate custom audio as well, on top of its hand-picked selection of audio journalism. The company believes this could become a powerful use case for AI at a time when there are legitimate concerns about AI chatbots providing false information or making up facts when they don’t know how to generate the right answer — something that’s called a “hallucination.” Already, we’ve seen falsehoods provided by AI chatbots when both Google and Microsoft demonstrated their new AI search tools, for instance.

Curio’s AI, on the other hand, won’t return anything it makes up: it combines audio clips from across its catalog in response to users’ queries, effectively creating mini podcast episodes that let you explore a topic through quality, fact-checked journalism.

The company suggests you could use the AI feature via prompts like, “tell me about the possibility of peace in Ukraine,” “what is the future of food?,” “tell me about the U.S. debt ceiling,” “tell me why Vermeer is so great,” or “I have 40 minutes, update me on AI,” for example.

Image Credits: Curio screenshot on web

However, the AI can’t return information on breaking news, as it takes time to translate news articles into narrated audio. But it can be used to explore various topics in more detail.

“We are trying to create, from a technical perspective, an AI that doesn’t hallucinate,” explains Curio’s Chief Marketing Officer, Gastón Tourn. “And the second thing that is interesting is this idea of unlocking knowledge from journalism — from news — because when you ask questions, it actually also proposes articles from, maybe from a few years ago, but they’re still super relevant to what’s going on right now.”

In addition to the media brands mentioned above, Curio also has relationships with The Economist, FT, WIRED, Vox, Vulture, Scientific American, Fast Company, Salon, Aeon, Bloomberg Businessweek, Foreign Policy, The Cut, and others — in total, over 30 publications are supported. (The New York Times, we should note, is not one of them. And the Times launched its own audio journalism app today, as it turns out.)

To get started with the new Curio AI, you’ll type your question or prompt into the box provided, as if you were interacting with an AI chatbot, like ChatGPT. (Curio relies on OpenAI’s GPT-3.5 model, we understand.) This feature is available both on the web and in Curio’s mobile apps.

To create the personalized audio episode for you, Curio crunches through over 5,000 hours of audio, but this all takes just a few moments of processing from the user’s perspective. This results in a custom audio episode that includes an introduction along with two articles from Curio’s publications.
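To illustrate how this kind of retrieval-based assembly can avoid made-up facts, here is a minimal, hypothetical sketch: rank existing narrated articles against the listener’s prompt and stitch the best matches behind a short intro. The embed() and write_intro() helpers are placeholders for whatever models Curio actually uses, and the catalog format is an assumption.

```python
# A minimal, hypothetical sketch of retrieval-based episode assembly in the
# spirit of Curio's "Rio": match a listener's prompt against the narrated
# catalog and stitch an intro plus the two best articles together.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_episode(prompt, catalog, embed, write_intro, top_k=2):
    """catalog: list of dicts with 'title', 'summary' and 'audio_url' keys."""
    query_vec = embed(prompt)
    # Rank only existing, fact-checked articles; nothing is generated from
    # scratch, which is how this design sidesteps hallucinated "facts".
    ranked = sorted(
        catalog,
        key=lambda item: cosine(query_vec, embed(item["summary"])),
        reverse=True,
    )[:top_k]
    intro_text = write_intro(prompt, [item["title"] for item in ranked])
    return {"intro": intro_text, "segments": [item["audio_url"] for item in ranked]}
```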

Curio itself is a premium subscription service priced at $24.99 per month (or $14.99/mo if paying for a year upfront). However, the AI feature is free to use, for the time being. The company says that’s because it wants to get “Rio” into the hands of as many people as possible, so it can learn. For instance, it’s looking to understand what length users prefer for these personalized episodes, though right now it’s leaning toward shorter articles.

Later, Curio may add more features — like the ability to share your episodes with others or get suggestions based on what other users are asking about.

“We don’t see AI as a curation tool,” notes Tourn. “We see it more as a discovery tool. We think what AI does is unearth content that is super interesting and finds ways to relate to it, but the curation is still human and the voices are still human.”

The company today has a relatively small customer base of over a thousand subscribers and a million-plus app downloads, but the AI addition may help the app gain more traction as users explore this unique use case for AI.

Audio journalism app Curio can now create personalized episodes using AI by Sarah Perez originally published on TechCrunch

Amazon refreshes its Echo lineup, adds a Wi-Fi extender and smart speaker combo, Echo Pop

After last fall’s Amazon hardware event, which brought us a handful of new Echo devices, like the Echo Dot with Clock, and other minor updates, Amazon today is rolling out a broader refresh of its Echo lineup, including a new form factor with the arrival of the Echo Pop. While the current Dot is more of a rounded, bubble-shaped Echo, the Pop is a semicircle that comes in teen-friendly colors like lavender and teal and also serves to extend your home’s Wi-Fi network thanks to eero.

Alongside the launch, Amazon also unveiled other updated Echo devices, including the Echo Show 5, Echo Buds, and Echo Auto, and offered more context about how it sees AI impacting Alexa’s future.

Until recently, Alexa’s future was seemingly uncertain, given reports of the billions of dollars the division has lost, its failures to inspire voice shopping, and, later, the larger cost-cutting measures at the retail giant, which included layoffs in Amazon’s devices group.

However, SVP of Alexa Rohit Prasad downplays the impact those cuts had on Alexa, telling TechCrunch that of the 2,000 people let go within Amazon’s Devices and Services organization, only “a fraction of those were in Alexa.” “Contrary to some of the things written, it was very small in context,” Prasad argues. “In terms of our roadmap and our conviction, Alexa is one of the biggest investments at Amazon and our conviction has only grown — especially in this time of how exciting AI is and what can be a quantitatively different, better, and more useful Alexa for our customers.”

Or, reading between the lines, Alexa’s AI potential outpaces that of the devices where it currently lives, like the Echo speakers, given its ability to extend its learnings elsewhere inside Amazon.

“I’m very optimistic that…the AI advances will be massive, but we are actually contributing to the Amazon businesses,” Prasad adds. “And I believe that Alexa is well on its trajectory to be that personal AI — which will also be a successful business for us.”

As for the products themselves, the new $39.99 Echo Pop, which could seemingly eat into the Dot’s market share, has a front-facing directional speaker and is powered by the Amazon AZ2 Neural Edge processor. It also comes with eero Built-in, allowing the device to add up to 1,000 square feet of coverage to an existing eero Wi-Fi network and giving it a dual purpose.

In addition to “Lavender Bloom” and “Midnight Teal,” the Pop comes in black and white.

Amazon says the Pop is in addition to, and not a replacement for, its existing Dot lineup, and the Dot and Dot with Clock remain available.

Image Credits: Amazon

Meanwhile, the $89.99 Echo Show 5 and $99.99 Echo Show 5 Kids edition both got an upgrade that makes them 20% faster than the prior generation. Both also include a re-engineered microphone array, a faster AZ2 Neural Edge processor, and an upgraded speaker system that promises double the bass and clearer sound.

And they now support the new smart home standard Matter, as does the new Echo Pop.

The kids’ device — available in the U.S., U.K., and Germany — is slightly more expensive as it includes a year of the subscription service Amazon Kids+, offering ad-free and age-appropriate apps, games, Alexa skills, and audiobooks, plus other kid-friendly features like the AI-powered story making tool, Create with Alexa. The latter has two new themes: Dinosaurs and Jazzy Jungle.

Image Credits: Amazon

Amazon’s $49.99 Alexa earbuds, Echo Buds, are also being updated with richer sound via a 12mm dynamic driver, improved clarity, and a longer-lasting battery.

The new Buds get up to 5 hours of continuous playback, and up to 20 total hours of listening from a full case charge — the latter is up from 15 hours, in the prior generation. The earbuds also feature two microphones and a voice detection accelerometer, customizable tap controls, a VIP filter, and multipoint pairing.

Image Credits: Amazon

The existing Echo Auto for vehicles is also now being made available to customers in Australia, Canada, the United Kingdom, Germany, France, Italy, Spain, and Japan.

Image Credits: Amazon

As for Alexa, Amazon previously shared some of its thoughts about its smart assistant’s future during last month’s first-quarter earnings call with investors. Here, CEO Andy Jassy spoke of the company’s work to build a more “generalized and capable” large language model (LLM) to power Alexa and said the new LLM would help Amazon work towards its goal of building the best personal assistant — not just a smart speaker.

Already, many of Alexa’s experiences have been powered by a large, 20 billion-parameter language model with an encoder-decoder architecture, which Amazon says is the biggest encoder-decoder model ever built. And with the introduction of Transformer-based, large-scale multilingual models, the Alexa Teacher Model (or AlexaTM, as it’s called) can transfer what it knows to another language without human supervision, helping Alexa get smarter, faster. The model is also used in conversational skills like Create with Alexa, where kids and Alexa come up with stories together.

Still, despite these AI advances, most customers use their Echo devices for basic tasks, like controlling their smart home or getting updates about their Amazon orders. Though sales of Alexa-enabled devices have now topped 500 million, they haven’t helped Amazon sell more merchandise, reports found. But Prasad says Alexa usage is still growing, with interactions up 35% year-over-year; use of the assistant to get information is up by more than 50% year-over-year, he notes.

As for what comes next, the exec only offers the broadest of hints.

The goal, he says, is for Alexa to provide better answers, but also those that are grounded in facts — a concern with modern AI. Plus, Alexa’s answers should be personalized to the end user. For example, if you told Alexa you felt hot, it should offer to turn your thermostat down, not suggest you go to the beach to cool off.

“We already have announced the Bedrock service which is for enterprise use cases,” Prasad says, referring to AWS’ new tools for building with generative AI. “And then, for Alexa — for the consumer use cases, like powering all the conversational experiences on Alexa — you’ll find qualitatively different elements of experiences that we’ll be launching along the year,” he teases.

Amazon refreshes its Echo lineup, adds a Wi-Fi extender and smart speaker combo, Echo Pop by Sarah Perez originally published on TechCrunch

US tech policy must keep pace with AI innovation

As innovation in artificial intelligence (AI) outpaces news cycles and grabs public attention, a framework for its responsible and ethical development and use has become increasingly critical to ensuring that this unprecedented technology wave reaches its full potential as a positive contribution to economic and societal progress.

The European Union has already been working to enact laws around responsible AI; I shared my thoughts on those initiatives nearly two years ago. At the time, the AI Act, as it is known, struck me as “an objective and measured approach to innovation and societal considerations.” Today, leaders of technology businesses and the United States government are coming together to map out a unified vision for responsible AI.

The power of generative AI

OpenAI’s release of ChatGPT captured the imagination of technology innovators, business leaders and the public last year, and consumer interest in and understanding of the capabilities of generative AI exploded. However, with artificial intelligence becoming mainstream, including as a political issue, and given humans’ propensity to experiment with and test systems, the potential for misinformation, the impact on privacy, and the risks of cybersecurity breaches and fraud could quickly become an afterthought.

In an early effort to address these potential challenges and ensure responsible AI innovation that protects Americans’ rights and safety, the White House has announced new actions to promote responsible AI.

In a fact sheet released by the White House last week, the Biden-Harris administration outlined three actions to “promote responsible American innovation in artificial intelligence (AI) and protect people’s rights and safety.” These include:

  • New investments to power responsible American AI R&D.
  • Public assessments of existing generative AI systems.
  • Policies to ensure the U.S. Government is leading by example in mitigating AI risks and harnessing AI opportunities.

New investments

Regarding new investments, the National Science Foundation’s $140 million in funding to launch seven new National AI Research Institutes pales in comparison to what has been raised by private companies.

While directionally correct, the U.S. government’s investment in AI broadly is microscopic compared to other countries’ government investments, namely China’s, which began in 2017. An immediate opportunity exists to amplify the impact of investment through academic partnerships for workforce development and research. The government should fund AI centers alongside academic and corporate institutions already at the forefront of AI research and development, driving innovation and creating new opportunities for businesses with the power of AI.

Collaborations between AI centers and top academic institutions, such as MIT’s Schwarzman College and Northeastern’s Institute for Experiential AI, help bridge the gap between theory and practical application by bringing together experts from academia, industry and government to collaborate on cutting-edge research and development projects that have real-world applications. By partnering with major enterprises, these centers can help companies better integrate AI into their operations, improving efficiency, reducing costs and delivering better consumer outcomes.

Additionally, these centers help to educate the next generation of AI experts by providing students with access to state-of-the-art technology, hands-on experience with real-world projects and mentorship from industry leaders. By taking a proactive and collaborative approach to AI, the U.S. government can help shape a future in which AI enhances, rather than replaces, human work. As a result, all members of society can benefit from the opportunities created by this powerful technology.

Public assessments

Model assessment is critical to ensuring that AI models are accurate, reliable and bias-free, all of which is essential for successful deployment in real-world applications. For example, imagine an urban planning use case in which generative AI is trained on data from redlined cities with historically underrepresented poor populations: it is just going to lead to more of the same. The same goes for bias in lending, as more financial institutions use AI algorithms to make lending decisions.

If these algorithms are trained on data discriminatory against certain demographic groups, they may unfairly deny loans to those groups, leading to economic and social disparities. Although these are just a few examples of bias in AI, this must stay top of mind regardless of how quickly new AI technologies and techniques are being developed and deployed.

To combat bias in AI, the administration has announced a new opportunity for model assessment at the DEF CON 31 AI Village, a forum for researchers, practitioners and enthusiasts to come together and explore the latest advances in artificial intelligence and machine learning. The model assessment is a collaborative initiative with some of the key players in the space, including Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI and Stability AI, leveraging a platform offered by Scale AI.

In addition, it will measure how the models align with the principles and practices outlined in the Biden-Harris administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. This is a positive development: the administration is directly engaging with enterprises and capitalizing on the expertise of technical leaders in the space, whose companies have effectively become corporate AI labs.

Government policies

With respect to the third action, policies to ensure the U.S. government is leading by example in mitigating AI risks and harnessing AI opportunities, the Office of Management and Budget is to draft policy guidance on the use of AI systems by the U.S. government for public comment. Again, no timeline or details for these policies have been given, but an executive order on racial equity issued earlier this year is expected to be at the forefront.

The executive order includes a provision directing government agencies to use AI and automated systems in a manner that advances equity. For these policies to have a meaningful impact, they must include incentives and repercussions; they cannot merely be optional guidance. For example, NIST standards for security are effective requirements for deployment by most governmental bodies. Failure to adhere to them is, at minimum, incredibly embarrassing for the individuals involved and grounds for personnel action in some parts of the government. Governmental AI policies, as part of NIST or otherwise, must be comparable to be effective.

Additionally, the cost of adhering to such regulations must not become an obstacle to startup-driven innovation. For instance, what could be achieved with a framework in which the cost of regulatory compliance scales with the size of the business? Finally, as the government becomes a significant buyer of AI platforms and tools, it is paramount that its policies become the guiding principle for building such tools. Make adherence to this guidance a literal, or even effective, requirement for purchase (e.g., the FedRAMP security standard), and these policies can move the needle.

As generative AI systems become more powerful and widespread, it is essential for all stakeholders — including founders, operators, investors, technologists, consumers and regulators — to be thoughtful and intentional in pursuing and engaging with these technologies. While generative AI, and AI more broadly, has the potential to revolutionize industries and create new opportunities, it also poses significant challenges, particularly around issues of bias, privacy and ethical considerations.

Therefore, all stakeholders must prioritize transparency, accountability and collaboration to ensure that AI is developed and used responsibly and beneficially. This means investing in ethical AI research and development, engaging with diverse perspectives and communities, and establishing clear guidelines and regulations for developing and deploying these technologies.

US tech policy must keep pace with AI innovation by Walter Thompson originally published on TechCrunch

AI2 is developing a large language model optimized for science

PaLM 2. GPT-4. The list of text-generating AI models practically grows by the day.

Most of these models are walled behind APIs, making it impossible for researchers to see exactly what makes them tick. But increasingly, community efforts are yielding open source AI that’s as sophisticated as, if not more sophisticated than, its commercial counterparts.

The latest of these efforts is the Open Language Model, a large language model set to be released by the nonprofit Allen Institute for Artificial Intelligence (AI2) sometime in 2024. Open Language Model, or OLMo for short, is being developed in collaboration with AMD and the Large Unified Modern Infrastructure consortium, which provides supercomputing power for training and education, as well as with Surge AI and MosaicML, which are providing data and training code.

“The research and technology communities need access to open language models to advance this science,” Hanna Hajishirzi, the senior director of NLP research at AI2, told TechCrunch in an email interview. “With OLMo, we are working to close the gap between public and private research capabilities and knowledge by building a competitive language model.”

One might wonder — this reporter included — why AI2 felt the need to develop an open language model when there are already several to choose from (see Bloom, Meta’s LLaMA, etc.). The way Hajishirzi sees it, while the open source releases to date have been valuable and even boundary-pushing, they’ve missed the mark in various ways.

AI2 sees OLMo as a platform, not just a model — one that’ll allow the research community to take each component AI2 creates and either use it themselves or seek to improve it. Everything AI2 makes for OLMo will be openly available, Hajishirzi says, including a public demo, training data set and API, and documented with “very limited” exceptions under “suitable” licensing.

“We’re building OLMo to create greater access for the AI research community to work directly on language models,” Hajishirzi said. “We believe the broad availability of all aspects of OLMo will enable the research community to take what we are creating and work to improve it. Our ultimate goal is to collaboratively build the best open language model in the world.”

OLMo’s other differentiator, according to Noah Smith, senior director of NLP research at AI2, is a focus on enabling the model to better leverage and understand textbooks and academic papers as opposed to, say, code. There have been other attempts at this, like Meta’s infamous Galactica model. But Hajishirzi believes that AI2’s work in academia and the tools it’s developed for research, like Semantic Scholar, will help make OLMo “uniquely suited” for scientific and academic applications.

“We believe OLMo has the potential to be something really special in the field, especially in a landscape where many are rushing to cash in on interest in generative AI models,” Smith said. “AI2’s unique ability to act as third party experts gives us an opportunity to work not only with our own world-class expertise but collaborate with the strongest minds in the industry. As a result, we think our rigorous, documented approach will set the stage for building the next generation of safe, effective AI technologies.”

That’s a nice sentiment, to be sure. But what about the thorny ethical and legal issues around training — and releasing — generative AI? The debate’s raging around the rights of content owners (among other affected stakeholders), and countless nagging issues have yet to be settled in the courts.

To allay concerns, the OLMo team plans to work with AI2’s legal department and to-be-determined outside experts, stopping at “checkpoints” in the model-building process to reassess privacy and intellectual property rights issues.

“We hope that through an open and transparent dialogue about the model and its intended use, we can better understand how to mitigate bias, toxicity, and shine a light on outstanding research questions within the community, ultimately resulting in one of the strongest models available,” Smith said.

What about the potential for misuse? Models, which are often toxic and biased to begin with, are ripe for abuse by bad actors intent on spreading disinformation and generating malicious code.

Hajishirzi said that AI2 will use a combination of licensing, model design and selective access to the underlying components to “maximize the scientific benefits while reducing the risk of harmful use.” To guide policy, OLMo has an ethics review committee with internal and external advisors (AI2 wouldn’t say who, exactly) that’ll provide feedback throughout the model creation process.

We’ll see to what extent that makes a difference. For now, a lot’s up in the air — including most of the model’s technical specs. (AI2 did reveal that it’ll have around 70 billion parameters, parameters being the parts of the model learned from historical training data.) Training’s set to begin on LUMI’s supercomputer in Finland — the fastest supercomputer in Europe, as of January — in the coming months.

AI2 is inviting collaborators to help contribute to — and critique — the model development process. Those interested can contact the OLMo project organizers.

AI2 is developing a large language model optimized for science by Kyle Wiggers originally published on TechCrunch

Ascend raises $25 million for pre-seed AI startups in the Pacific Northwest

Investing in artificial intelligence (AI) startups is the latest bandwagon VCs are piling onto. But as last year’s crypto experts quickly work to rebrand as AI experts, they’ll have to compete with the VCs who have been investing in the category all along.

Seattle-based Ascend is one of them. Firm founder and solo GP Kirby Winfield has been involved in the AI sector as either a founder or an investor since the ’90s. Now that seemingly every VC has turned their attention to the category, he told TechCrunch he’s glad he’s been in it for so long, because it means he won’t make some of the mistakes newer entrants will.

“It’s so easy to throw together a vertical AI demo,” Winfield told TechCrunch. “You see a lot of folks who would have been decent SaaS founders, trying to be decent AI founders. I would say it is pretty easy to identify who has actual chops from a technical perspective. We are really fortunate to be investing at this time regardless of the hype.”

Ascend is announcing the close of $25 million for its second fund. Winfield said the firm will invest in pre-seed AI and machine learning (ML) companies largely based in the Pacific Northwest. This continues the firm’s strategy from its first fund, which raised $15 million and started deploying in 2019.

Winfield isn’t fully avoiding the hype, though. The firm hasn’t always focused solely on AI and ML: Ascend’s Fund I also invested in brands and marketplaces, areas it is stepping away from with this latest batch of capital.

The fund was raised 100% from individuals, Winfield said, and consists of two vehicles: one that raised $22.5 million and another that raised $2.5 million from existing portfolio company founders. Winfield said he was able to raise $21 million in the first month the fund was open before letting it sit open for almost the entirety of 2022 hoping to see some additional funds mosey in, a process he also ran for Fund I.

“I would say that money trickled in a lot more strongly in 2019 when I raised Fund I,” Winfield said. “I couldn’t really think of a good reason to close the fund. We got another $3 million in the door by leaving it open. I don’t overthink these things too much.”

Winfield added that many of the Fund I LPs were happy to re-up now that the industry’s attitude toward investing in AI has changed dramatically since he raised Fund I.

But as every startup rewrites its marketing to call itself an AI company, Winfield said he is intentional about the kind of companies he backs. He isn’t necessarily looking for AI companies, but instead is focused on startups that will utilize the tech to find a better solution.

“AI doesn’t matter,” he said. “What matters is the solution you are selling to your customers. Many founders and investors are getting wrapped around the axle and putting the technology and solution before the benefit.”

Companies from Fund I that fit that bill according to Winfield include Xembly, which uses AI to create a virtual chief of staff, Fabric, which operates as a “headless” e-commerce platform, and WhyLabs, an AI observability platform.

This fund also doubles down on the firm’s focus on companies in the Pacific Northwest, with a particular focus on Seattle. While that might sound limiting for folks who focus on Silicon Valley, Winfield disagrees, citing the talent that comes out of Microsoft and Amazon and the companies that are incubated at the nonprofit Allen Institute for Artificial Intelligence, where Winfield has been the investor in residence for nearly six years.

But no matter his experience and intentions, it may still be hard for Winfield to compete with the rapidly growing flock of AI investors. Even if he brings a beneficial background, he doesn’t come with the same deep pockets some of his fellow VCs have — Bessemer just announced it is putting $1 billion of its already raised capital toward the strategy. Plus, we all know how aggressive VCs chasing hype can be.

Xembly founder and CEO Pete Christothoulou said that despite the market’s noise, companies should look to work with VCs like Winfield because while everyone is looking to put money to work in AI, not all support is created equal.

“An AI fund without the right underpinnings is just money,” Christothoulou said. “The money is nice but you want the relationships that the investor can bring. If they can baseline their advice and real technical guidance, that’s where it starts getting really interesting and [Winfield] has a big opportunity.”

Ascend raises $25 million for pre-seed AI startups in the Pacific Northwest by Rebecca Szkutak originally published on TechCrunch

Eventbrite integrates GPT capabilities into platform to aid the event planning process

Eventbrite, an event management and ticketing website, on Monday announced new GPT-powered tools that will help event creators with arguably some of the most tedious, time-consuming steps in event planning: event pages, email campaigns and social media ads. The features will roll out this month.

Now, users can create an event page on Eventbrite with automatically generated event descriptions and images based on the event title, location and date. The tool also fills in the event type, category and sub-category, which will likely make it easier for people to discover events.
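As a rough illustration of how a draft could be generated from just those fields, here is a minimal, hypothetical sketch; the prompt wording, category list and the complete() helper are assumptions standing in for whatever model integration Eventbrite actually uses.

```python
# Hypothetical sketch: assemble an event page draft from a title, location and
# date by prompting a text-generation model. complete() is a placeholder for
# the actual model call; the prompt and categories are illustrative.
def draft_event_page(title, location, date, complete):
    prompt = (
        "Write a short, upbeat event description and suggest one category "
        "from [Music, Business, Food & Drink, Community] for this event.\n"
        f"Title: {title}\nLocation: {location}\nDate: {date}"
    )
    draft = complete(prompt)  # e.g. a call to a hosted LLM API
    # The draft is a starting point: creators are expected to revise and edit
    # it before publishing, rather than ship it verbatim.
    return {"title": title, "location": location, "date": date, "description": draft}
```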

If event creators get writer’s block when writing an email campaign or social media ad, Eventbrite’s new tools generate copy to help market the event more efficiently, a company spokesperson told TechCrunch.

The company has introduced other AI-driven tools in the past, such as Eventbrite Boost, the marketing platform that uses machine learning to target audiences on social media platforms like Instagram and Facebook. Boost has three tiers: Lite ($15/month), Core ($50/month) and Pro ($100/month).

Note that the new copywriting feature for social media ads is only available to paid Boost subscribers.

Eventbrite hopes the GPT integrations will save event creators time, since they provide a starting point to revise and edit rather than having to start from scratch. The company also launched these features as a way to increase engagement, reach more audiences and drive ticket sales. The tools will likely be most helpful for independent hosts who may not be able to afford outside help from assistants or copywriters.

“Eventbrite’s creators and consumers remain the north star of our business. As we continue to innovate, integrating AI into our platform will help accelerate the business growth of our global community of event entrepreneurs by helping them save time and reach more attendees faster,” co-founder and CEO Julia Hartz said in a statement.

The company plans to build on these newly launched AI-powered tools to further streamline the event hosting experience, the Eventbrite spokesperson added.

For years, AI technology has changed the game for event planners, from promotional tools and marketing strategies to chatbots, data analysis, recommendations and more. Even the wedding planning platform Joy hopped on the ChatGPT bandwagon this year, launching an OpenAI-powered tool that generates drafts for vows and wedding-focused speeches.

Eventbrite integrates GPT capabilities into platform to aid the event planning process by Lauren Forristal originally published on TechCrunch