Microsoft’s AI-powered Designer tool comes to Teams

Designer, Microsoft’s AI-powered art-generating tool, is coming to the free version of Teams.

Starting today in preview on Windows 11, Microsoft Teams users can tap Designer, a Canva-like app, to generate designs for presentations, posters, digital postcards and more to share on social media and other channels. Designer accepts text prompts or uploaded images, and leverages DALL-E 2, OpenAI’s text-to-image AI, to ideate designs — with drop-downs and text boxes for further customization and personalization.

Designer, which is also available via the web and in Microsoft’s Edge browser through the sidebar, was originally announced last October. New features, including caption generation and animated visuals, arrived in April, and Microsoft has promised that more — like advanced editing features — are on the way.

Microsoft’s ultimate goal is to monetize Designer through Microsoft 365 Personal and Family subscriptions, but the company hasn’t telegraphed where pricing might land, exactly. It has said, however, that some of the tool’s functionality will remain free. Which functionality is anyone’s guess.

Microsoft Teams Designer

Image Credits: Microsoft

Other Teams updates announced today have decidedly less to do with AI.

Now, users of GroupMe, Microsoft’s free group messaging app, can start Teams calls from inside any new or existing group chat.

And beginning this week, Teams’ communities feature, which lets users connect, share and collaborate in Discord-like groups, is supported on Windows 11 (with macOS and Windows 10 compatibility to come down the line). As with Teams communities on other platforms, Windows 11 users can create communities, host events, moderate content and get notified about upcoming events and activities.

A new communities discovery feature, set to roll out in the coming days on Windows 11, iOS and Android, allows Teams users to join communities focused on topics like parenting, gaming, gardening, technology and remote work. (It’s up to community owners on iOS and Android to enable their communities to be discovered on Teams, Microsoft says.) Owners can approve or reject requests to join their communities and assign owner controls to others in the group, as well as create polls via MSForms and share posts as emails, if they so choose.

In a related update, Teams community members can now record videos from their mobile devices using a new capture experience with updated filters and markup tools. And on iOS, community owners can scan and invite emails or phone numbers from an online document, paper directory or other list using their phone camera.

The plethora of new features arrives as Teams continues to grow, bolstered in large part by remote and hybrid work trends. The number of daily active Teams users almost doubled from 2021 to 2022, increasing from 145 million to 270 million.

Microsoft’s AI-powered Designer tool comes to Teams by Kyle Wiggers originally published on TechCrunch

Top Indian tech advocacy group replaces Big Tech execs following criticism

The Internet and Mobile Association of India (IAMAI), an influential tech industry body, has appointed Dream Sports co-founder and chief executive Harsh Jain as the new chairperson of the association, bucking the tradition of giving the top roles to Big Tech executives after receiving criticism from many Indian startups.

Rajesh Magow, co-founder and group chief executive of MakeMyTrip, will now serve as the Vice Chairman of IAMAI, whereas Times Internet’s Satyan Gajwani has been appointed as the industry body’s treasurer. Magow steps into the role previously held by Meta’s Shivnath Thukral, while Jain takes over from Google India head Sanjay Gupta.

The appointments, the result of an internal election, come in the wake of numerous top Indian entrepreneurs asserting that the IAMAI had lost credibility due to the positions it adopted on regulations and policies being drafted by New Delhi.

Anupam Mittal of Shaadi.com claimed that IAMAI’s viewpoints, which largely echoed those of Big Tech, indicated that the influential lobby group had become a “mouthpiece” of American tech giants.

The recent strain is the latest sign of a fracture in the relationship between major U.S. corporations such as Google and Amazon and Indian companies.

Google, Meta, Amazon, and Microsoft have poured tens of billions of dollars into India in the past decade. This investment has been aimed at transforming the world’s most populous country into a key international market, particularly as user growth has been decelerating in other parts of the globe.

“The new 24-member Governing Council and the new Executive Council of the IAMAI will take charge from the present councils at the upcoming Annual General Meeting. The IAMAI Governing Council election is held every two years. Eighty-three members of the IAMAI contested the elections this year following the end of the two-year tenure of the previous council,” IAMAI said in a statement Thursday.

More to follow.

Top Indian tech advocacy group replaces Big Tech execs following criticism by Manish Singh originally published on TechCrunch

Microsoft’s AI reaches Indian villages

Merely months have passed since Microsoft and OpenAI unveiled ChatGPT to the world, sparking a fervor among tech enthusiasts and industry titans. Now, the technology that underpins this generative AI is breaking barriers, reaching remote hamlets hundreds of miles away from the tech hubs of Seattle and San Francisco.

Jugalbandi, a chatbot built by Microsoft in collaboration with the open-source initiative OpenNyAI and the Indian government-backed AI4Bharat, is showing signs of progress in redefining information access for villagers in India, offering insights into more than 170 government programs in 10 indigenous languages.

While India is the world’s second-largest wireless market, the technological progress witnessed in its cities is starkly absent in smaller towns and villages. Only a meager 11% of the country’s populace is proficient in English, with a slight majority of 57% comfortable with Hindi. These communities also grapple with literacy issues, lacking even regular access to conventional media.

“That leaves vast numbers of the population unable to access government programs because of language barriers,” Microsoft explained in a blog post.

To bridge this gap, Jugalbandi leverages a platform with near-universal recognition in India: WhatsApp. With the aid of language models from AI4Bharat and reasoning models from Microsoft Azure OpenAI Service, Jugalbandi empowers individuals to pose questions and receive responses in both text and voice, in their local language.
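The flow Microsoft describes boils down to a simple loop: translate the user’s question into English, reason over it with an Azure OpenAI model, then translate the answer back. Here is a minimal sketch of that pipeline, assuming the openai Python SDK; the translate_* helpers stand in for AI4Bharat’s language models and are hypothetical placeholders, not Jugalbandi’s actual code:

```python
from openai import AzureOpenAI  # pip install openai

# Hypothetical stand-ins for AI4Bharat's translation models (not Jugalbandi's real code).
def translate_to_english(text: str, source_lang: str) -> str:
    return text  # placeholder: no-op translation

def translate_from_english(text: str, target_lang: str) -> str:
    return text  # placeholder: no-op translation

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    api_key="<key>",
    api_version="2024-02-01",  # assumption; match what your resource supports
)

def answer(question: str, lang: str) -> str:
    # 1. Local-language question (arriving over WhatsApp) -> English
    english_question = translate_to_english(question, source_lang=lang)
    # 2. Reason over the question with an Azure OpenAI deployment
    completion = client.chat.completions.create(
        model="gpt-35-turbo",  # your Azure deployment name
        messages=[
            {"role": "system", "content": "Answer questions about Indian government welfare programs simply and accurately."},
            {"role": "user", "content": english_question},
        ],
    )
    # 3. English answer -> the user's language, sent back as text (or synthesized to voice)
    return translate_from_english(completion.choices[0].message.content, target_lang=lang)
```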

“This time around, this technology reaches everybody in the world,” said Microsoft chief executive Satya Nadella at the company’s Build conference Tuesday. “There are two things that stood out for me: things that we build can in fact make a difference to 8 billion people, not just some small group of people … and to be able to do that by diffusion that takes days and weeks, not years and centuries, because we want that equitable growth and trust in technology, that we protect the fundamental rights that we care about.”

Microsoft envisions Jugalbandi expanding its reach, ultimately aiding villagers with a broad spectrum of needs, with India proving to be an ideal ground for the tech titan.

The U.S. tech giant is also furthering its collaborations with numerous Indian enterprises aimed at democratizing information access for the broader populace. One such firm is Delhi-based Gram Vaani, which runs an interactive voice-response platform that enables volunteers to extend personalized assistance and advice to farmers. The firm says it has amassed 3 million users across northern and central India.

Microsoft’s AI reaches Indian villages by Manish Singh originally published on TechCrunch

28 years later, Windows finally supports RAR files

It’s 1999, and my friends and I are surfing warez sites using Internet Explorer on our 98SE gaming rig. Finally we push past the scams and porn to find a list of files on an FTP server, labeled “.rar, .r00, .r01, r.02…” But what the hell are these?

“Oh, it’s segmented. You have to download this program to expand those, it’s called WinRAR. Way better than WinZip.”

“Do we have to pay for it?”

“No… but if you’re as cheap as I think you are, it’ll keep bugging you to pay for a quarter of a century until, in the grim darkness of 2023, Windows 11 finally supports the format natively.”

In retrospect my friend’s comment was amazingly prescient. How could he know how grim and how dark the future would be? How could he predict that Windows would switch back to sequential numbering, but skip 9? And how did he know that I am so, shall we say, thrifty, that rather than paying $30, I would for more than two decades just try to get my task done in WinRAR fast enough that the “Please purchase WinRAR license” popup didn’t have a chance to appear?

Yes, it has taken the better part of three decades for the .rar file to finally be supported in Windows without any kind of additional software. Back in the ’90s it was just one of several competing compression apps (or as they were called back then, “applications”), for the purposes of shrinking collections of files so they could be more efficiently transferred over our woefully slow internet.

How long did it take for us to download the Star Trek set of screensavers for After Dark from the dial-up BBS, using the telnet app WhiteKnight, you ask? Overnight. It was, after all, a shade over five megabytes. But had it not been a .sea (self-extracting archive) courtesy of Stuffit we would have been waiting well into the next day.

Yes, compression was a must back then, in my case as a young software pirate but of course in more legitimate ways like software distribution and actual “archival” purposes. I can’t speak to whether WinRAR was as common among enterprises as it was among procurers of illicitly duplicated games and applications. But the fact that it has lived a full 30 years since its original development as a DOS program (28 since it arrived on Windows), up until its most recent release — last week, and still nearly small enough to fit on a 3.5″ hard floppy — suggests it found its niche.

As time has advanced, however, the necessity of apps like WinRAR has diminished, as both drive capacity and network bandwidth have increased exponentially. The handful of megabytes that once took me overnight to download and represented a considerable proportion of my hard disk are now the bare minimum to transfer in a single second if you want to call your connection “broadband.” Furthermore, open source standards and options have proliferated, such as the libarchive project.

Then, at some point, someone at Microsoft must have gotten fed up with rushing their .rar operations the way I have for 20 years and thought, there must be a better way. And so, under the subheading of “Reducing toil,” we have a few helpful UI updates, then casually and apropos of nothing, this:

In addition…

We have added native support for additional archive formats, including tar, 7-zip, rar, gz and many others using the libarchive open-source project. You now can get improved performance of archive functionality during compression on Windows.
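For the curious, libarchive is the same open-source library you can already call from a script. Here is a minimal sketch of listing a .rar’s contents in Python, assuming the third-party libarchive-c binding; it illustrates the library itself, not how Explorer integrates it:

```python
import libarchive  # pip install libarchive-c (wraps the libarchive C library)

# List the entries of an archive; libarchive autodetects rar, 7z, tar, gz and more.
with libarchive.file_reader("example.rar") as archive:
    for entry in archive:
        print(entry.pathname)
```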

Of course the library has been integrated with other OSes for a long time, and native support for .rar files is old hat for many. But for me personally this change is epochal.

I have still found uses for WinRAR over the years, some legal, some… perhaps most not so legal. And it has never been lost on me that, in the midst of my piracy I was doubly a pirate, for I was several decades past the end of my 40-day WinRAR trial period. When my alacrity was lacking (my APMs have fallen of late) I would see that nag screen and think: am I really that petty? Will I really continue to abuse this poor shareware for my entire life? When will I set myself upon the straight and narrow once more (if ever I was on it to begin with) and make an honest app of WinRAR?

Reader, I purchased WinRAR.

Image Credits: Devin Coldewey / TechCrunch

It seems only fair that I pay the cost of a coffee — as you know, about $31 these days — to support a piece of software that is among very, very few to travel with me for much of my computing life. Few other programs have been as constant a companion, though I would pay for Winamp if I could.

(Plus, I haven’t updated to Windows 11 and won’t until there’s no other option, so I don’t have the benefit of this particular integration.)

I don’t know what the future holds for WinRAR; I’ve asked the company what it thinks Windows officially adopting the format will mean for its software and business and will update if I hear back.

28 years later, Windows finally supports RAR files by Devin Coldewey originally published on TechCrunch

Kenya’s Tawi takes on auditory processing disorder to win Microsoft Imagine Cup

Microsoft’s student tech-for-good competition, the Imagine Cup, has crowned this year’s winner: Tawi, a team from Kenya that applied machine learning tools to helping kids with auditory processing disorder understand others better.

APD is a hearing condition in which someone hears sound just fine but their brain has trouble processing it. This can lead to delays in learning and understanding speech, as well as everyday inconvenience as communication takes more work and concentration.

John Onsongo Mabeya, Muna Numan Said, Syntiche Musawu Cishimbi and Zakariya Hussein Hassan formed Tawi because they all wanted to make something in educational tech, and decided on APD since one of the members has a sibling living with it.

Ordinarily, a hearing aid is prescribed, as it can help isolate and emphasize voices. But hearing aids can be difficult to obtain and maintain depending on where you live, and they may not even be the right solution. Knowing what’s possible in real-time sound processing and captioning, the team decided to make an APD-focused tool that works with an ordinary smartphone and headphones.

“Tawi is a Swahili name, and in English it means sprouting leaf. Children are the sprouting generation and we wanted to do something for them so they could be uplifted and reach their full potential,” said Said.

Team Tawi’s app does noise suppression, emphasizes speech, and converts that speech to text in real time, and can be configured to a kid’s specific needs and hearing issues.
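The real-time captioning piece of that pipeline is the most approachable to sketch. Below is a minimal example of continuous speech-to-text using Microsoft’s Azure Speech SDK for Python; this illustrates the general technique, not Tawi’s actual implementation, and the team’s noise suppression and speech-emphasis steps aren’t shown:

```python
import azure.cognitiveservices.speech as speechsdk  # pip install azure-cognitiveservices-speech

# Placeholders: supply your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="<key>", region="<region>")
audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

def show_caption(evt: speechsdk.SpeechRecognitionEventArgs) -> None:
    # Each finalized utterance arrives here; a real app would render it as an on-screen caption.
    print(evt.result.text)

recognizer.recognized.connect(show_caption)
recognizer.start_continuous_recognition()
input("Listening... press Enter to stop.\n")
recognizer.stop_continuous_recognition()
```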

“We believe that Tawi, which uses real-time speech recognition and amplification, could be a game changer for these children, allowing them to participate more fully in social and educational settings,” Said put in their project description. “Our hope is that Tawi will eventually become widely available and help to address a critical community need.”

The final winner of the Imagine Cup, winnowed down from three regional finalists and several more category winners, takes home $100,000, some face time with Microsoft CEO Satya Nadella, and “Level 2 access to Microsoft for Startups Founders Hub,” which hopefully they find useful.

Tawi’s team together with their laptops.

The other two finalists also deserve a mention:

Cardiac Self-Monitoring Tool from Thailand: These kids put together a device that connects a stethoscope to a smartphone and uses machine learning to help parse the incoming sound, letting people check themselves for anomalies.

Eupnea: A US team that uses AI to listen to the coughs of tuberculosis patients and helps recommend treatment options.

You can check out the other finalists in this recent blog post. Congrats to everyone who made it!

Kenya’s Tawi takes on auditory processing disorder to win Microsoft Imagine Cup by Devin Coldewey originally published on TechCrunch

Microsoft launches an AI tool to take the pain out of building websites

Microsoft wants to take the pain out of designing web pages. AI is its solution.

Today marks the launch of Copilot in Power Pages in preview for U.S. customers, an AI-powered assistant for Microsoft’s low-code business website creation tool, Power Pages. Given prompts, Copilot can generate text, forms, chatbots and web page layouts as well as create and edit image and site design themes.

To create a form, for example, users can simply describe the kind of form that they need and Copilot will build it and auto-generate the necessary back-end database tables. Those tables can then be edited, added to or removed using natural language within Copilot.

“As the maker, you describe what you need in natural language and use Copilot suggestions to design web pages, create content and build complex business data forms for your website,” Sangya Singh, VP of Power Pages at Microsoft, told TechCrunch in an email interview. “You no longer need to start from a blank slate.”

Generating a website with AI isn’t exactly a novel idea — not in this day and age, at least. Tools like Jasper can handle copy, images, layouts and more, while generators like Mixo can create basic splash pages when given a short description.

But Singh paints Copilot in Power Pages as more versatile than the competing solutions out there, while stressing that it’s not a tool that could — or should — be used to generate whole spam sites.

“Power Pages now allows you to go from no code (describing the site via natural language) to low code (editing the website design and layouts using the design studio) to pro code (building advanced customization with familiar web frameworks) seamlessly,” she said. “For Power Pages, crafting Copilot experiences within Power Pages is revolutionary because enabling an AI assistant to build business data-centric sites using natural language has not been done before.”

Copilot in Power Pages

Image Credits: Microsoft

Of course, depending on the domain and use case, adding generative AI to the mix can be a risky proposition. Even if it’s not the original intent, AI can be prompted to generate toxic content. And it can go off the rails if not closely monitored.

Singh claims that Copilot in Power Pages, though, which is powered by OpenAI’s GPT-3.5 model, has “guardrails” to protect against issues that might crop up.

“We take the website maker’s user prompts to the Copilot, get suggestions from the large language model, and do a lot of processing, like offensive content filtering, before displaying suggestions back to the maker,” Singh said. “If Copilot’s suggestions are irrelevant or inappropriate, makers can easily report the AI generated output via a thumbs-down gesture in our experience and provide additional feedback.”

What about the aforementioned chatbot, also powered by GPT-3.5, that Power Pages users can now insert into their websites? According to Singh, it’s similarly built with safeguards, including a whitelist of URLs that it’ll look through to get answers.

“The key thing to note is that Power Pages Copilot is not an ‘automatic’ AI-pilot generating websites, but an ‘AI assistant’ to a human website maker — hence the name Copilot — where the maker can ask for suggestions on how to build different components of a business data-centric site,” she added. “Giving the makers ‘total control’ is a principle we have where the maker is always in control if they want to apply the Copilot suggestion or tweak it further or discard it.”

Microsoft launches an AI tool to take the pain out of building websites by Kyle Wiggers originally published on TechCrunch

Microsoft’s Azure AI Studio lets developers build their own AI ‘copilots’

Microsoft wants companies to build their own AI-powered “copilots” — using tools on Azure and machine learning models from its close partner OpenAI, of course.

Today at its annual Build conference, Microsoft launched Azure AI Studio, a new capability within the Azure OpenAI Service that lets customers combine a model like OpenAI’s ChatGPT or GPT-4 with their own data — whether text or images — and build a chat assistant or another type of app that “reasons over” the private data. (Recall that Azure OpenAI Service is Microsoft’s fully managed, enterprise-focused product designed to give businesses access to AI lab OpenAI’s technologies with added governance features.)

Microsoft defines a “copilot” as a chatbot app that uses AI, typically text-generating or image-generating AI, to assist with tasks like writing a sales pitch or generating images for a presentation. The company has created several such apps, such as Bing Chat. But its AI-powered copilots can’t necessarily draw on a company’s proprietary data to perform tasks — unlike copilots created through Azure AI Studio.

“In our Azure AI Studio, we’re making it easy for developers to ground Azure OpenAI Service models on their data … and do that securely without seeing that data or having to train a model on the data,” John Montgomery, Microsoft’s CVP of AI platform, told TechCrunch via email. “It’s a tremendous accelerant for our customers to be able to build their own copilots.”

In Azure AI Studio, the copilot-building process starts with selecting a generative AI model like GPT-4. The next step is giving the copilot a “meta-prompt,” or a base description of the copilot’s role and how it should function.
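In practice, a meta-prompt amounts to the system message sent along with every call to the model. Here is a minimal sketch using the openai Python SDK against Azure OpenAI Service; the endpoint, API version and “Contoso” scenario are placeholders, and real Azure AI Studio copilots layer data grounding and plugins on top of this:

```python
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder endpoint
    api_key="<key>",
    api_version="2024-02-01",  # assumption; match what your resource supports
)

# The "meta-prompt": a base description of the copilot's role and how it should behave.
META_PROMPT = (
    "You are a support copilot for Contoso's HR portal. "  # Contoso is a hypothetical example
    "Answer only from the company's benefits documentation, and say you don't know otherwise."
)

response = client.chat.completions.create(
    model="gpt-4",  # your Azure deployment name
    messages=[
        {"role": "system", "content": META_PROMPT},
        {"role": "user", "content": "How many vacation days do new hires get?"},
    ],
)
print(response.choices[0].message.content)
```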

Cloud-based storage can be added to AI copilots created with Azure AI Studio for the purposes of keeping track of a conversation with a user and responding with the appropriate context and awareness. Plugins extend copilots, giving them access to third-party data and other services.

Microsoft Copilots

Image Credits: Microsoft

Microsoft believes the value proposition in Azure AI Studio is allowing customers to leverage OpenAI’s models on their own data, in compliance with their organizational policies and access rights and without compromising things like security, data policies or document ranking. Customers can choose to integrate internal or external data that their organization owns or has access to, including structured, unstructured or semi-structured data.

With Azure AI Studio, Microsoft’s making a push for customized models built using its cloud-hosted tooling. It’s a potentially lucrative line of revenue as the Azure OpenAI Service continues to grow — Microsoft says that it’s currently serving more than 4,500 companies, including Coursera, Grammarly, Volvo and IKEA.

Upgrades to Azure OpenAI Service

To further incentivize Azure OpenAI Service adoption, Microsoft’s rolling out updates aimed at boosting capacity for high-volume customers.

A new feature called the Provisioned Throughput SKU allows Azure OpenAI Service customers to reserve and deploy model processing capacity on a monthly or yearly basis. Customers can purchase “provisioned throughput units,” or PTUs, to deploy OpenAI models including GPT-3.5-Turbo or GPT-4 with reserved processing capacity during the commitment period.

OpenAI previously offered dedicated capacity for ChatGPT via its API. But Provisioned Throughput SKU greatly expands on this — and with a bent toward the enterprise.

“With reserved processing capacity, customers can expect consistent latency and throughput for workloads with consistent characteristics such as prompt size, completion size and number of concurrent API requests,” a Microsoft spokesperson told TechCrunch via email.

Microsoft’s Azure AI Studio lets developers build their own AI ‘copilots’ by Kyle Wiggers originally published on TechCrunch

Microsoft goes all in on plug-ins for AI apps

Microsoft aims to extend its ecosystem of AI-powered apps and services, called “copilots,” with plug-ins from third-party developers.

Today at its annual Build conference, Microsoft announced that it’s adopting the same plug-in standard its close collaborator, OpenAI, introduced for ChatGPT, its AI-powered chatbot — allowing developers to build plug-ins that work across ChatGPT, Bing Chat (on the web and in the Microsoft Edge sidebar), Dynamics 365 Copilot, Microsoft 365 Copilot and the newly launched Windows Copilot.

“I think over the coming years, this will become an expectation for how all software works,” Kevin Scott, Microsoft’s CTO, said in a blog post shared with TechCrunch last week.

Bold pronouncements aside, the new plug-in framework lets Microsoft’s family of “copilots” — apps that use AI to assist users with various tasks, such as writing an email or generating images — interact with a range of different software and services. Using IDEs like Visual Studio, Codespaces and Visual Studio Code, developers can build plug-ins that retrieve real-time information, incorporate company or other business data and take action on a user’s behalf.

A plug-in could let the Microsoft 365 Copilot, for example, make arrangements for a trip in line with a company’s travel policy, query a site like WolframAlpha to solve an equation or answer questions about how certain legal issues at a firm were handled in the past.

Customers in the Microsoft 365 Copilot Early Access Program (plus ChatGPT Plus subscribers) will gain access to new plug-ins from partners in the coming weeks, including Atlassian, Adobe, ServiceNow, Thomson Reuters, Moveworks, and Mural. Bing Chat, meanwhile, will see new plug-ins added to its existing collection from Instacart, Kayak, Klarna, Redfin and Zillow, and those same Bing Chat plug-ins will come to Windows within Windows Copilot.

The OpenTable plug-in allows Bing Chat to search across restaurants for available bookings, for example, while the Instacart plug-in lets the chatbot take a dinner menu, turn it into a shopping list and place an order to get the ingredients delivered. Meanwhile, the new Bing plug-in brings web and search data from Bing into ChatGPT, complete with citations.

A new framework

Scott describes plug-ins as a bridge between an AI system, like ChatGPT, and data a third party wants to keep private or proprietary. A plug-in gives an AI system access to those private files, enabling it to, for example, answer a question about business-specific data.
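Under OpenAI’s plug-in standard, which Microsoft is adopting, that bridge is declared in a small manifest (normally an ai-plugin.json file) pointing at an OpenAPI description of the third-party service. Here is the shape of one, written out as a Python dict for illustration; the names and URLs are hypothetical:

```python
# The fields mirror OpenAI's ai-plugin.json manifest; the values here are made-up examples.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Acme Expenses",
    "name_for_model": "acme_expenses",
    "description_for_human": "Look up and file expense reports at Acme.",
    "description_for_model": (
        "Use this to search the user's expense reports and create new ones. "
        "The model reads this description to decide when to call the plug-in."
    ),
    "auth": {"type": "none"},  # real enterprise plug-ins would typically use OAuth or service auth
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",  # the API the copilot actually calls
    },
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}
```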

There’s certainly growing demand for such a bridge as privacy becomes a major issue with generative AI, which has a tendency to leak sensitive data, like phone numbers and email addresses, from the data sets on which it was trained. Looking to minimize risk, companies including Apple and Samsung have banned employees from using ChatGPT and similar AI tools over concerns employees might mishandle and leak confidential data to the system.

“What a plug-in does is it says, ‘Hey, we want to make that pattern reusable and set some boundaries about how it gets used,’” John Montgomery, CVP of AI platform at Microsoft, said in a canned statement.

There are three types of plug-ins within Microsoft’s new framework: ChatGPT plug-ins, Microsoft Teams message extensions and Power Platform connectors.

Microsoft Copilot plugins

Image Credits: Microsoft

Teams message extensions, which allow users to interact with a web service through buttons and forms in Teams, aren’t new. Nor are Power Platform connectors, which act as a wrapper around an API that allows the underlying service to “talk” to apps in Microsoft’s Power Platform portfolio (e.g. Power Automate). But Microsoft’s expanding their reach, letting developers tap new and existing message extensions and connectors to extend Microsoft 365 Copilot, the company’s assistant feature for Microsoft 365 apps and services like Word, Excel and PowerPoint.

For instance, Power Platform connectors can be used to import structured data into the “Dataverse,” Microsoft’s service that stores and manages data used by internal business apps, that Microsoft 365 Copilot can then access. In a demo during Build, Microsoft showed how Dentsu, a public relations firm, tapped Microsoft 365 Copilot together with a plug-in for Jira and data from Atlassian’s Confluence without having to write new code.

Microsoft says that developers will be able to create and debug their own plug-ins in a number of ways, including through its Azure AI family of apps, which is adding capabilities to run and test plug-ins on private enterprise data. Azure OpenAI Service, Microsoft’s managed, enterprise-focused product designed to give businesses access to OpenAI’s technologies with added governance features, will also support plug-ins. And Teams Toolkit for Visual Studio will gain features for piloting plug-ins.

Transitioning to a platform

As for how they’ll be distributed, Microsoft says that developers will be able to configure, publish and manage plug-ins through the Developer Portal for Teams, among other places. They’ll also be able to monetize them, although the company wasn’t clear on how, exactly, pricing will work.

In any case, with plug-ins, Microsoft’s playing for keeps in the highly competitive generative AI race. Plug-ins transform the company’s “copilots” into aggregators, essentially — putting them on a path to becoming one-stop-shops for both enterprise  and consumer customers.

Microsoft no doubt perceives the lock-in opportunity as increasingly key as the company faces competitive pressure from startups and tech giants alike building generative AI, including Google and Anthropic. One could imagine plug-ins becoming a lucrative new source of revenue down the line as apps and services rely more and more on generative AI. And it could allay the fears of businesses who claim generative AI trained on their data violates their rights; Getty Images and Reddit, among others, have taken steps to prevent companies from training generative AI on their data without some form of compensation.

I’d expect rivals to answer Microsoft’s and OpenAI’s plug-in framework with plug-in frameworks of their own. But Microsoft has a first-mover advantage, as OpenAI had with ChatGPT. And that shouldn’t be underestimated.

Microsoft goes all in on plug-ins for AI apps by Kyle Wiggers originally published on TechCrunch

Microsoft wants to make Windows a better place for developers

While mostly a developer event, Build has long been where Microsoft puts a spotlight on consumer-centric updates to Windows. This year, the company is taking a different approach: It is highlighting the work it is doing to improve the developer experience on Windows. And we’re talking about major updates here — all of which will come to the Windows Insider dev channel this week.

GitHub Copilot X, for example, is coming to the Windows Terminal and the company is also launching a new extensible open source Windows app (Dev Home) that allows users to quickly set up their machines, connect to their code repositories and add widgets to track their projects or monitor their local machine’s performance.

Microsoft is also launching a new type of storage volume for Windows 11, Dev Drive, that is based on the same Resilient File System the company uses for Azure and that promises up to 30% performance improvements in build times. Essentially, this is the first time this file system is available for Windows client users, and thanks to cooperation with the Windows Defender team, Microsoft’s security tool can now scan these drives without blocking file operations.

All of this is happening against the backdrop of Windows seeing quite a bit of growth among developers (and especially Python developers). Microsoft says the number of developers who are using the platform increased by 24% last year. In part, this is driven by the arrival of the Windows Subsystem for Linux.

“In the last year, we’ve been listening to the community and seeing what’s the next set of things they really want us to do to improve the experience,” Michael Harsh, the group program manager for Microsoft’s Windows Platform team, told me. “Two key themes really emerged. The first one was that the pain of setting up an environment on Windows is a huge amount of toil. That’s a problem that’s existed as long as we’ve had visual installers — so kind of forever. And then, being able to improve the disk performance, especially for things like build times and working with package managers like Pip and NPM.”

To make it easier for developers to set up their machines, Microsoft now enables them to set up a WinGet configuration file to create unattended and repeatable configurations (WinGet is Microsoft’s command-line tool for managing and configuring Windows apps). This should make it considerably easier to onboard new developers to a new project and ensure that they use the right versions of their tools and frameworks. Harsh described it as adding orchestration to WinGet.

As for the Windows Terminal, the GitHub Copilot integration will be available to users who subscribe to the service through GitHub. It will offer both inline support and an experimental chat experience that can recommend commands, explain errors and even take actions in the Terminal app itself. Warp beat Microsoft to the punch here by integrating ChatGPT into its terminal a few months ago. Still, given that the Windows Terminal comes installed by default (it recently replaced the Windows Console as the default in Windows 11), Microsoft obviously has a lot of reach here.

Image Credits: Microsoft

To some degree, it’s Dev Home that’s bringing all of this together. The idea here is to build a single app that brings together all of the data and tools that developers need to manage Windows 11 as their development machine. This means they can use it to kick off these new WinGet configurations and configure their online Dev Boxes and GitHub Codespaces, for example, as well as set up the new Dev Drive and install new tools and packages, all without having to switch contexts.

There are also a few nice bonus features the company is quietly adding in the next Windows 11 release: You can now open tar, 7-zip, GZ and RAR files (among others) right from Windows Explorer without having to install any third-party tools. You can also now hide the time and date from the taskbar (useful for screen recordings).

Microsoft wants to make Windows a better place for developers by Frederic Lardinois originally published on TechCrunch

Microsoft launches new AI tool to moderate text and images

Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities.

Called Azure AI Content Safety, the new product, available through the Azure AI platform, offers a range of AI models trained to detect “inappropriate” content across images and text. The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s chatbot in Bing and Copilot, GitHub’s AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement.

Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it.

In another knock against Microsoft, the company just a few months ago laid off the ethics and society team within its larger AI organization. The move left Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.

Setting all that aside for a moment, Azure AI Content Safety — which protects against biased, sexist, racist, hateful, violent and self-harm content, according to Microsoft — is integrated into Azure OpenAI Service, Microsoft’s fully managed, corporate-focused product intended to give businesses access to OpenAI’s technologies with added governance and compliance features. But Azure AI Content Safety can also be applied to non-AI systems, such as online communities and gaming platforms.

Pricing starts at $1.50 per 1,000 images and $0.75 per 1,000 text records.

Azure AI Content Safety is similar to other AI-powered toxicity detection services including Perspective, maintained by Google’s Counter Abuse Technology Team and Jigsaw, and succeeds Microsoft’s own Content Moderator tool. (No word on whether it was built on Microsoft’s acquisition of Two Hat, a content moderation provider, in 2021.) Those services, like Azure AI Content Safety, offer a score from zero to 100 on how similar new comments and images are to others previously identified as toxic.

But there’s reason to be skeptical of them. Beyond Bing Chat’s early stumbles and Microsoft’s poorly targeted layoffs, studies have shown that AI toxicity detection tech still struggles to overcome challenges including biases against specific subsets of users.

Several years ago, a team at Penn State found that posts on social media about people with disabilities could be flagged as more negative or toxic by commonly used public sentiment and toxicity detection models. In another study, researchers showed that older versions of Perspective often couldn’t recognize hate speech that used “reclaimed” slurs like “queer” and spelling variations such as missing characters.

The problem extends beyond toxicity-detectors-as-a-service. This week, a New York Times report revealed that, eight years after a controversy over Black people being mislabeled as gorillas by image analysis software, tech giants still fear repeating the mistake.

Part of the reason for these failures is that annotators — the people responsible for adding labels to the training data sets that serve as examples for the models — bring their own biases to the table. For example, there are frequently differences in the annotations between labelers who self-identify as African American or as members of the LGBTQ+ community and annotators who don’t identify as either of those two groups.

To combat some of these issues, Microsoft allows the filters in Azure AI Content Safety to be fine-tuned for context. Bird explains:

For example, the phrase, “run over the hill and attack” used in a game would be considered a medium level of violence and blocked if the gaming system was configured to block medium severity content. An adjustment to accept medium levels of violence would enable the model to tolerate the phrase.
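That severity threshold is exactly what an integrator tunes when calling the service. Here is a minimal sketch using the azure-ai-contentsafety Python package; the endpoint and key are placeholders, the “medium equals severity 4” mapping is an assumption, and the exact response fields may differ between SDK versions:

```python
from azure.ai.contentsafety import ContentSafetyClient  # pip install azure-ai-contentsafety
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    AzureKeyCredential("<key>"),
)

SEVERITY_THRESHOLD = 4  # treat "medium" and above as blockable (assumption; check current docs)

result = client.analyze_text(AnalyzeTextOptions(text="run over the hill and attack"))
for analysis in result.categories_analysis:
    if analysis.category == TextCategory.VIOLENCE and analysis.severity >= SEVERITY_THRESHOLD:
        print("Blocked: violence severity", analysis.severity)
        break
else:
    print("Allowed")
```

Raising or lowering SEVERITY_THRESHOLD is the code-level equivalent of the gaming-system adjustment Bird describes above.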

“We have a team of linguistic and fairness experts that worked to define the guidelines taking into account cultural, language and context,” a Microsoft spokesperson added. “We then trained the AI models to reflect these guidelines … AI will always make some mistakes, [however,] so for applications that require errors to be nearly non-existent we recommend using a human-in-the-loop to verify results.”

One early adopter of Azure AI Content Safety is Koo, a Bangalore, India-based blogging platform with a user base that speaks over 20 languages. Microsoft says it’s partnering with Koo to tackle moderation challenges like analyzing memes and learning the colloquial nuances in languages other than English.

We weren’t offered the chance to test Azure AI Content Safety ahead of its release, and Microsoft didn’t answer questions about its annotation or bias mitigation approaches. But rest assured we’ll be watching closely to see how Azure AI Content Safety performs in the wild.


Microsoft launches new AI tool to moderate text and images by Kyle Wiggers originally published on TechCrunch