BluBracket scores $6.5M seed to help secure code in distributed environments

BluBracket, a new security startup from the folks who brought you Vera, came out of stealth today and announced a $6.5 million seed investment. Unusual Ventures led the round with participation by Point72 Ventures, SignalFire and Firebolt Ventures.

The company was launched by Ajay Aurora and Prakash Linga, who until last year were CEO and CTO, respectively, at Vera, a security company that helps companies secure documents by having the security profile follow the document wherever it goes.

Aurora says he and Linga are entrepreneurs at heart, and they were itching to start something new after more than five years at Vera. While both still sit on the Vera board, they decided to attack a new problem.

He says the idea for BluBracket actually came out of conversations with Vera customers, who wanted something similar to Vera, except to protect code. “About 18-24 months ago, we started hearing from our customers, who were saying, ‘Hey, you guys secure documents and files. What’s becoming really important for us is to be able to share code. Do you guys secure source code?'”

That was not a problem Vera was suited to solve, but it was a light bulb moment for Aurora and Linga, who saw an opportunity and decided to seize it. Recognizing that the way development teams operate has changed, they started BluBracket and developed a pair of products to handle the unique set of problems associated with a distributed set of developers working out of a Git repository — whether that’s GitHub, GitLab or Bitbucket.

The first product is BluBracket CodeInsight, an auditing tool available starting today. It gives companies full visibility into who has pulled code from the Git repository. “Once they have a repo, and then developers clone it, we can help them understand what clones exist on what devices, what third parties have their code, and even be able to search open source projects for code that might have been pushed into open source. So we’re creating what we call a blueprint of where an enterprise’s code is,” Aurora explained.
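To make the idea of a code “blueprint” concrete, here is a minimal sketch of how an auditing tool could fingerprint the files in a repository so that exact copies can be recognized later in a clone or in a public open source project. This is purely illustrative and not BluBracket’s implementation; the function names and the hash-per-file approach are assumptions for the example.

```python
# Illustrative sketch only -- not BluBracket's implementation.
# Fingerprint each source file in a repo so exact copies can be spotted
# later, e.g. in a clone on another device or in a public project.
import hashlib
from pathlib import Path

def fingerprint_repo(repo_path: str) -> dict:
    """Map each source file (here: *.py) to a SHA-256 digest of its contents."""
    fingerprints = {}
    for path in Path(repo_path).rglob("*.py"):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        fingerprints[str(path.relative_to(repo_path))] = digest
    return fingerprints

def find_copied_files(internal: dict, external: dict) -> list:
    """Return internal files whose exact contents also show up externally."""
    external_digests = set(external.values())
    return [name for name, digest in internal.items() if digest in external_digests]

# Usage: compare the enterprise repo against a scan of a public mirror.
# leaked = find_copied_files(fingerprint_repo("internal_repo"), fingerprint_repo("public_clone"))
```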

The second tool, BluBracket CodeSecure, which won’t be available until later in the year, is how you secure that code, including the ability to classify code by level of importance. Code tagged with the highest level of importance gets special status, and companies can attach rules to it, such as barring it from being distributed to an open source repository without explicit permission.

They believe the combination of these tools will enable companies to maintain control over the code, even in a distributed system. Aurora says they have taken care to make sure that the system provides the needed security layer without affecting the operation of the continuous delivery pipeline.

“When you’re compiling or when you’re going from development to staging to production, in those cases because the code is sitting in Git, and the code itself has not been modified, BluBracket won’t break the chain,” he explained. If you tried to distribute special code outside the system, you might get a message that this requires authorization, depending on how the tags have been configured.

This is very early days for BluBracket, but the company takes its first steps as a startup this week as it emerges from stealth at the RSA security conference in San Francisco. It will be participating in the RSA Sandbox competition for early security startups at the conference, as well.

PullRequest snags remote developer hiring platform Moonlight in case of startup buying startup

PullRequest, a startup that provides code review as a service, announced today that it was buying Moonlight, an early stage startup that has built an online platform for hiring remote developers. The companies did not share the terms.

Lyal Avery, founder and CEO at PullRequest, says that he bought this company to expand his range of services. “Our platform is at a place where we’re very confident about our ability to identify issues. We’re moving to the next phase of fixing issues automatically. In order to do that we have to have access to people producing code. So with the developers on our platform that are currently reviewers, as well as the Moonlight folks, we can start to fix the issues we identify, and also attach that to our learning processes,” Avery explained.

This fits with the company’s vision of eventually automating common fixes. It’s currently working on building machine learning models to facilitate that automation. Moonlight gives PullRequest access to the platform’s data, which can help train and perfect the Beta models that the company is working on.

Avery says his vision isn’t to replace human developers, so much as to make them faster and more efficient than they are today. He says that from the time a bug is found in website code to the time it gets fixed is on average about six hours. He wants to reduce that to 20 minutes, and he believes that buying Moonlight will give him more data to get to that goal faster, while also expanding the range of services from code review to issue remediation.

It’s fairly unusual for a startup that has raised just over $12 million (according to Crunchbase data) to be out shopping for another, but Avery sees buying small companies like Moonlight as an excellent way to fill in gaps in the platform, while offering an easier path to expansion.

Moonlight is a small shop with just two employees, both of whom will be joining PullRequest, but it has 3,000 developers on its platform, which PullRequest can now access. For now, Avery says that the companies will remain separate, and Moonlight will continue to operate its own website under the PullRequest umbrella.

Moonlight is based in Brooklyn and had raised an undisclosed pre-seed round before being acquired today. PullRequest, which is based in Austin, was a member of the Y Combinator Summer 2017 cohort. It raised a $2.3 million seed round in December 2017 and another $8 million in April 2018.

Getting tech right in Iowa and elsewhere requires insight into data, human behavior

What happened in Iowa’s Democratic caucus last week is a textbook example of how applying technological approaches to public sector work can go badly wrong just when we need it to go right.

While it’s possible to conclude that Iowa teaches us we shouldn’t let tech anywhere near a governmental process, this is the wrong conclusion to reach, and it misses the complexity of what happened and what didn’t. Technology won’t fix a broken policy; the key is understanding what it is good for.

What does it look like to get technology right in solving public problems? Three core principles can help build public-interest technology more effectively: solve an actual problem; design with and for users and their lives in mind; and start small (test, improve, test).

Before developing an app or throwing a new technology into the mix in a political process it is worth asking: what is the goal of this app, and what will an app do that will improve on the existing process?

Getting it right starts with understanding the humans who will use what you build to solve an actual problem. What do they actually need? In the case of Iowa, this would have meant asking seasoned local organizers about what would help them during the vote count. It also means talking directly to precinct captains and caucus goers and observing the unique process in which neighbors convince neighbors to move to a different corner of a school gymnasium when their candidate hasn’t been successful. In addition to asking about the idea of a web application, it is critical to test the application with real users under real conditions to see how it works and make improvements.

In building such a critical game-day app, you need to test it under real-world conditions, which means adoption and ease of use matter. While Shadow (the company charged with this build) did a lightweight test with some users, there wasn’t the runway to adapt or learn from those for whom the app was designed. The app may have worked fine, but that doesn’t matter if people didn’t use it or couldn’t download it.

One model of how this works can be found in the Nurse Family Partnership, a high-impact nonprofit that helps first-time, low-income moms.

This nonprofit has adapted to have feedback loops from its moms and nurses via email and text messages. It even has a full-time role “responsible for supporting the organization’s vision to scale plan by listening and learning from primary, secondary and internal customers to assess what can be done to offer an exceptional Nurse-Family Partnership experience.”

Building on its program of in-person assistance, the Nurse Family Partnership co-designed an app (with Hopelab, a social innovation lab in collaboration with behavioral-science based software company Ayogo). The Goal Mama app builds upon the relationship between nurses and moms. It was developed with these clients in mind after research showed the majority of moms in the program were using their smartphones extensively, so this would help meet moms where they were. Through this approach of using technology and data to address the needs of their workforce and clients, they have served 309,787 moms across 633 counties and 41 states.

Another example is the work of Built for Zero, a national effort focused on the ambitious goal of ending homelessness across 80 cities and counties. Community organizers start with the personal challenges of the unhoused — they know that without understanding the person and their needs, they won’t be able to build successful interventions that get them housed. Their work combines a methodology of human-centered organizing with smart data science to deliver constant assessment and improvement, and they have a collaboration with the Tableau Foundation to build and train communities to collect data with new standards and monitor progress toward a goal of zero homelessness.

Good tech always starts small, tests, learns and improves with real users. Parties, governments and nonprofits should expand on the learning methods that are common to tech startups and espoused by Eric Ries in The Lean Startup. By starting with small tests and learning quickly, public-interest technology acknowledges the high stakes of building technology to improve democracy: real people’s lives are at stake. With questions about equity, justice, legitimacy and integrity on the line, starting small helps ensure enough runway to make important changes and work out the kinks.

Take for example the work of Alia. Launched by the National Domestic Workers Alliance (NDWA), it’s the first benefits portal for house cleaners. Domestic workers do not typically receive employee benefits, making things like taking a sick day or visiting a doctor impossible without losing pay.

Its easy-to-use interface enables people who hire house cleaners to contribute directly to their benefits, allowing workers to receive paid time off, accident insurance and life insurance. Alia’s engineers benefited from deep user insights gained by connecting to a network of house cleaners. In the growing gig economy, the Alia model may be instructive for a range of workers across local, state and federal levels. Obama organizers in 2008 dramatically increased volunteerism (up to 18%) just by A/B testing the words and colors used for the call-to-action on their website.

There are many instructive public-interest technologies that focus on designing not just for the user but with them. This includes work in civil society, such as the Center for Civic Design, which helps ensure people can have easy and seamless interactions with government, and The Principles for Digital Development, the first of which is “design with the user.” There is also work being done inside governments, from the Government Digital Service in the U.K. to the United States Digital Service, which was launched during the Obama administration.

Finally, it also helps to deeply understand the conditions in which technology will be used. What are the lived experiences of the people who will be using the tool? Did the designers dig in and attend a caucus to see how paper has captured the moving of bodies and changing of minds in gyms, cafes and VFW halls?

In the case of Iowa, it requires understanding the caucuses’ norms, rules and culture. A political caucus is a unique situation.

Not to mention, this year the Iowa caucuses deployed several process changes intended to increase transparency, but they also made the process more complex, which needed to be taken into account when deploying a tech solution. Understanding the conditions in which technology is deployed requires a nuanced understanding of policies and behavior, and of how policy changes can impact design choices.

Building a technical solution without doing the user-research to see what people really need runs the risk of reducing credibility and further eroding trust. Building the technology itself is often the simple part. The complex part is relational. It requires investing in capacity to engage, train, test and iterate.

We are accustomed to same-day delivery and instantaneous streaming in our private and social lives, which raises our expectations for what we want from the public sector. The push to modernize and streamline is what leads to believing an app is the solution. But building the next killer app for our democracy requires more than just prototyping a splashy tool.

Public-interest technology means working toward the broader, difficult challenge of rebuilding trust in our democracy. Every time we deploy tech for the means of modernizing a process, we need to remember this end goal and make sure we’re getting it right.

Datometry snares $17M Series B to help move data and applications to the cloud

Moving data to the cloud from an on-prem data warehouse like Teradata is a hard problem to solve, especially if you’ve built custom applications that are based on the data. Datometry, a San Francisco startup, has developed a solution to solve that issue, and today it announced a $17 million Series B investment.

WRVI Capital led the round with participation from existing investors including Amarjit Gill, Dell Technologies Capital, Redline Capital and Acorn Pacific. The company has raised a total of $28 million, according to Crunchbase data.

The startup is helping move data and applications — lock, stock and barrel — to the cloud. For starters, it’s focusing on Teradata data warehouses and the applications built on top of them, because Teradata is a popular enterprise offering, says Mike Waas, CEO and co-founder at the company.

“Pretty much all major enterprises are struggling right now with getting their data into the cloud. At Datometry, we built a software platform that lets them take their existing applications and move them over to new cloud technology as is, and operate with cloud databases without having to change any SQL or APIs,” Waas told TechCrunch.

Today, without Datometry, customers would have to hire expensive systems integrators and take months or years rewriting their applications, but Datometry says it has found a way to move the applications to the cloud, reducing the time to migrate from years to weeks or months, by using virtualization.

The company starts by building a new schema for the cloud platform. It supports all the major players including Amazon, Microsoft and Google. It then runs the applications through a virtual database running the schema and connects the old application with a cloud data warehouse like Amazon Redshift.
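As a rough illustration of the kind of translation involved, the toy sketch below rewrites a couple of Teradata-specific SQL idioms into Redshift-friendly SQL. To be clear, this is not how Datometry works; its virtualization layer is designed so applications run unchanged, and the rewrite rules and function name here are assumptions made only to show the flavor of the dialect gap.

```python
# Toy dialect-rewriting sketch, for illustration only; Datometry's platform
# virtualizes the database so applications don't need rewriting at all.
import re

REWRITE_RULES = [
    (re.compile(r"^\s*SEL\b", re.IGNORECASE), "SELECT"),               # Teradata's SEL shorthand
    (re.compile(r"\bSAMPLE\s+(\d+)\s*$", re.IGNORECASE), r"LIMIT \1"), # crude stand-in: SAMPLE is random, LIMIT is not
]

def rewrite_for_redshift(teradata_sql: str) -> str:
    sql = teradata_sql
    for pattern, replacement in REWRITE_RULES:
        sql = pattern.sub(replacement, sql)
    return sql

print(rewrite_for_redshift("SEL first_name FROM customers SAMPLE 10"))
# -> SELECT first_name FROM customers LIMIT 10
```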

Waas sees virtualization as the key here, as it enables his customers to run the applications just as they always have on-prem, but in a more modern context. “Personally I believe that it’s time for virtualization to disrupt the database stack just the way it has disrupted pretty much everything else in the datacenter,” he said.

From there, they can start developing more modern applications in the cloud, but he says that his company can get them to the cloud faster and cheaper than was possible before, and without disrupting their operations in any major way.

Waas founded the company in 2013, and it took several years to build the solution. This is a hard problem to solve, and he was ahead of the curve in trying to move this type of data. When his solution came online in the last 18 months, the timing turned out to be good, as companies were suddenly looking for ways to move data and applications to the cloud.

He says he has been able to build a client base of 40 customers with just 30 employees because the cloud service providers are helping with sales and walking the company into clients; in fact, they are generating more demand than a small startup can handle right now.

The plan moving forward is to use some of the money from this round to build a partner network with systems integrators to help with implementation, so that they can concentrate on developing the product and supporting other data repositories in the future.

Radar, a location data startup, says its “big bet” is on putting privacy first

Pick any app on your phone, and there’s a greater than average chance that it’s tracking your location right now.

Sometimes they don’t even tell you. Your location can be continually collected and uploaded, then monetized by advertisers and other data tracking firms. These companies also sell the data to the government — no warrants needed. And even if you’re app-less, your phone company knows where you are at any given time, and for the longest time sold that data to anyone who wanted it.

Location data is some of the most personal information we have — yet few think much about it. Our location reveals where we go, when, and often why. It can be used to know our favorite places and our routines, and also who we talk to. And yet it’s spilling out of our phones every second of every day to private companies, subject to little regulation or oversight, building up precise maps of our lives. Headlines about these practices have sparked anger and pushed lawmakers into taking action. And consumers are becoming increasingly aware of their tracked activity thanks to phone makers, like Apple, alerting users to background location tracking. Foursquare, one of the biggest location data companies, even called on Congress to do more to regulate the sale of location data.

But location data is not going anywhere. It’s a convenience that’s just too convenient, and it’s an industry that’s going from strength to strength. The location data market was valued at $10 billion last year and is set to more than double in size by 2027.

There is appetite for change. Radar, a location data startup based in New York, promised in a recent blog post that it will “not sell any data we collect, and we do not share location data across customers.”

It’s a promise that Radar chief executive Nick Patrick said he’s willing to bet the company on.

“We want to be that location layer that unlocks the next generation of experiences but we also want to do it in a privacy conscious way,” Patrick told TechCrunch. “That’s our big bet.”

Developers integrate Radar into their apps. Those app makers can create location geofences around their businesses, like any Walmart or Burger King. When a user enters that location, the app knows to serve relevant notifications or alerts, making it functionally just like any other location data provider.
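The core of a geofence check is simple in principle: a fence is a center point plus a radius, and the SDK fires when the device’s reported position falls inside it. The sketch below shows that check using a standard haversine distance; it is a generic illustration rather than Radar’s actual SDK, and the function names are assumptions.

```python
# Generic geofence-entry check, for illustration; this is not Radar's SDK.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in meters."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m):
    return haversine_m(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# Example: a 100-meter fence around a store entrance.
print(inside_geofence(40.7411, -73.9897, 40.7415, -73.9900, radius_m=100))  # True
```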

But that’s where Patrick says Radar deviates.

“We want to be the most privacy-first player,” Patrick said. Radar bills itself as a location data software-as-a-service company, rather than an ad tech company like its immediate rivals. That may sound like a marketing point — it is — but it’s also an important distinction, Patrick says, because it changes how the company makes its money. Instead of monetizing the collected data, Radar prices its platform based on the number of monthly active users that use the apps with Radar inside.

“We’re not going to package that up into an audience segment and sell it on an ad exchange,” he said. “We’re not going to pull all of the data together from all the different devices that we’re installed on and do foot traffic analytics or attribution.”

But that trust doesn’t come easy, nor should it. Some of the most popular apps have lost the trust of their users through invasive privacy practices, like collecting locations from users without their knowledge or permission by scanning nearby Bluetooth beacons or Wi-Fi networks to infer where a person is.

We were curious and ran some of the apps that use Radar, like Joann, GasBuddy, Draft King and others, through a network traffic analyzer to see what was going on under the hood. We found that Radar only activated when location permissions were granted on the device — something apps have tried to get around in the past. The apps we checked instantly sent our precise location data back to Radar — which was to be expected — along with the device type, software version and little else. The data collected by Radar is significantly less than what comparable apps share with their developers, but it still allows integrations with third-party platforms to make use of that location data. Via, a popular ride-sharing app, uses a person’s location, collected by Radar, to deliver notifications and promotions to users at airports and other places of interest.

The company boasts that its technology is used in apps with more than 100 million device installs.

“We see a ton of opportunity around enabling folks to build location, but we also see that the space has been mishandled,” said Patrick. “We think the location space is in need of a technical leader but also an ethical leader that can enable the stuff in a privacy conscious way.”

It was a convincing pitch for Radar’s investors, who just injected $20 million into its Series B round, led by Accel, a substantial step up from its $8 million Series A. Patrick said the round will help the company build out the platform further. One feature on Radar’s to-do list is to let the platform take advantage of on-device processing, so that “no user event data ever touches Radar’s servers,” said Patrick. The raise will also help the company expand its physical footprint on the West Coast by opening an office in San Francisco. Its home base in New York will expand as well, he said, increasing the company’s headcount from its current two dozen employees.

“Radar stands apart due to its focus on infrastructure rather than ad tech,” said Vas Natarajan, a partner at Accel, who also took a seat on Radar’s board.

Two Sigma Ventures, Heavybit, Prime Set, and Bedrock Capital participated in the round.

Patrick said his pitch is also working for apps and developers, which recognize that their users are becoming more aware of privacy issues. He’s seen companies, some of which he now calls customers, that are increasingly looking for more privacy-focused partners and vendors, not least to bolster their own respective reputations.

It’s healthy to be skeptical. Given the past year, it’s hard to have any faith in any location data company, let alone embrace one. And yet it’s a compelling pitch to an app community that, only after years of misdeeds and a steady stream of critical headlines, is being forced to repair its image.

But a company’s words are only as strong as its actions, and only time will tell if they hold up.

Develop a serious cybersecurity strategic plan that incorporates CCM

It’s a new year and corporate concerns about cybersecurity risk are high. Which means top executives at Fortune 500 companies will do what they always do — spend big on security technology. Global cybersecurity spending is on a path to exceed $1 trillion cumulatively over the five-year period from 2017 to 2021.

But increasing budgets each year with little strategic forethought is a corporate failing. Further, the lack of proactive monitoring of a company’s cyber risk profile almost ensures gaps and vulnerabilities that will be exploited by hackers.

Corporations that don’t formulate a thorough cybersecurity plan and monitor its implementation will encounter more breaches and increasingly become mired in scuttled M&A opportunities. Market research firm Gartner says that 60% of organizations engaging in M&A activity are already weighing a target’s cybersecurity track record, posture and strategy as a key factor in their due diligence. A company that has been hacked is a less attractive acquisition target — hardly a minor point, given that M&A activity globally, led by the U.S., has set records in recent years and is widely expected to maintain or exceed this level going forward.

The most highly publicized example of an M&A-related cybersecurity headache was Verizon’s discovery a couple of years ago of a prior data breach at Yahoo, after an acquisition agreement had already been formulated. The discovery almost killed the deal and ultimately resulted in a $350 million reduction in Verizon’s purchase price.

Enterprises must step up to the plate once and for all and develop meaningful metrics to assess the quality of their cybersecurity protection and monitor its completeness and effectiveness. And the best way to do this is to begin taking steps to incorporate continuous controls monitoring (CCM).

Facebook Workplace co-founder launches downtime fire alarm Kintaba

“It’s an open secret that every company is on fire,” says Kintaba co-founder John Egan. “At any given moment something is going horribly wrong in a way that it has never gone wrong before.” Code failure downtimes, server outages, and hack attacks plague engineering teams. Yet the tools for waking up the right employees, assembling a team to fix the problem, and doing a post-mortem to assess how to prevent it from happening again can be as chaotic as the crisis itself.

Text messages, Slack channels, task managers, and Google Docs aren’t sufficient for actually learning from mistakes. Alerting systems like PagerDuty focus on the rapid response, but not the educational process in the aftermath. Finally, there’s a more holistic solution to incident response with today’s launch of Kintaba.

The Kintaba team experienced these pains firsthand while working at Facebook, after Egan and Zac Morris’ Y Combinator-backed data transfer startup Caffeinated Mind was acqui-hired in 2012. Years later, when they tried to build a blockchain startup and the whole stack was constantly in flames, they longed for a better incident alert tool. So they built one themselves and named it after the Japanese art of Kintsugi, in which gold is used to fill in cracked pottery, “which teaches us to embrace the imperfect and to value the repaired,” Egan says.

With today’s launch, Kintaba offers a clear dashboard where everyone in the company can see what major problems have cropped up, plus who’s responding and how. Kintaba’s live activity log and collaboration space for responders let them debate and analyze their mitigation moves. It integrates with Slack, and lets team members subscribe to different levels of alerts or search through issues with categorized hashtags.

“The ability to turn catastrophes into opportunities is one of the biggest differentiating factors between successful and unsuccessful teams and companies,” says Egan. That’s why Kintaba doesn’t stop when your outage does.

Kintaba founders (from left): John Egan, Zac Morris and Cole Potrocky

As the fire gets contained, Kintaba provides a rich text editor connected to its dashboard for quickly constructing a post-mortem of what went wrong, why, what fixes were tried, what worked, and how to safeguard systems for the future. Its automated scheduling assistant helps teams plan meetings to internalize the post-mortem.

Kintaba’s well-pedigreed team and their approach to an unsexy but critical corner of software-as-a-service attracted $2.25 million in funding led by New York’s FirstMark Capital.

“All these features add up to Kintaba taking away all the annoying administrative overhead and organization that comes with running a successful modern incident management practice,” says Egan, “so you can focus on fixing the big issues and learning from the experience.”

Egan, Morris and Cole Potrocky met while working at Facebook, which is known for spawning other enterprise productivity startups based on its top-notch internal tools. Facebook co-founder Dustin Moskovitz built a task management system to reduce how many meetings he had to hold, then left to turn that into Asana, which filed to go public this week.

The trio had been working on internal communication and engineering tools, as well as the procedures for employing them. “We saw firsthand working at companies like Facebook how powerful those practices can be and wanted to make them easier for anyone to implement without having to stitch a bunch of tools together,” Egan tells me. He stuck around to co-found Facebook’s enterprise collaboration suite Workplace, while Potrocky built engineering architecture there and Morris became a mobile security lead at Uber.

Like many blockchain projects, Kintaba’s predecessor, crypto collectibles wallet Vault, proved an engineering nightmare without clear product-market fit. So the team ditched it and pivoted to build out the internal alerting tool they’d been tinkering with. That origin story sounds a lot like Slack’s, which began as a gaming company that pivoted to turn its internal chat tool into a business.

So what’s the difference between Kintaba and just using Slack and email, or a monitoring tool like PagerDuty, Splunk’s VictorOps or Atlassian’s OpsGenie? Here’s how Egan breaks down a downtime situation handled with Kintaba:

“You’re on call and your pager is blowing up because all your servers have stopped serving data. You’re overwhelmed and the root cause could be any of the multitude of systems sending you alerts. With Kintaba, you aren’t left to fend for yourself. You declare an incident with high severity and the system creates a collaborative space that automatically adds an experienced IMOC (incident manager on call) along with other relevant on calls. Kintaba also posts in a company-wide incident Slack channel. Now you can work together to solve the problem right inside the incident’s collaborative space or in Slack while simultaneously keeping stakeholders updated by directing them to the Kintaba incident page instead of sending out update emails. Interested parties can get quick info from the stickied comments and #tags. Once the incident is resolved, Kintaba helps you write a postmortem of what went wrong, how it was fixed, and what will be done to prevent it from happening. Kintaba then automatically distributes the postmortem and sets up an incident review on your calendar.”

Essentially, instead of one employee panicking about what to do while the team struggles to coordinate across a bunch of fragmented messaging threads, the incident reporting process is smoother and all the discussion happens in Kintaba. And if there’s a security breach that a non-engineer notices, they can launch a Kintaba alert and assemble the legal and PR teams to help too.
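As a rough sketch of the flow Egan describes, the snippet below models an incident with a severity and an incident manager on call, then announces it in a company-wide channel through a Slack incoming webhook. It is illustrative only and not Kintaba’s API; the class, field names and webhook URL are placeholders.

```python
# Illustrative sketch of the flow described above; this is not Kintaba's API.
import json
import urllib.request
from dataclasses import dataclass, field

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE/WEBHOOK/URL"  # placeholder

@dataclass
class Incident:
    title: str
    severity: str                       # e.g. "SEV1" for a full outage
    imoc: str                           # incident manager on call
    responders: list = field(default_factory=list)
    tags: list = field(default_factory=list)

def declare_incident(incident: Incident) -> None:
    """Announce a new incident in the company-wide incident channel."""
    text = (f"{incident.severity} declared: {incident.title}\n"
            f"IMOC: {incident.imoc} | responders: {', '.join(incident.responders) or 'TBD'}")
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(SLACK_WEBHOOK_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

# Usage (with a real webhook URL configured):
# declare_incident(Incident("API servers returning 500s", "SEV1", imoc="@oncall-im",
#                           responders=["@backend-oncall"], tags=["#api", "#outage"]))
```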

Alternatively, Egan describes the downtime fiascos he’d experience without Kintaba like this:

The on-call has to start waking up their management chain to try and figure out who needs to be involved. The team maybe throws a Slack channel together, but since there’s no common high-severity incident management system and so many teams are affected by the downtime, other teams are also throwing Slack channels together, email threads are happening all over the place, and multiple groups of people are trying to solve the problem at once. Engineers begin stepping all over each other and sales teams start emailing managers demanding to know what’s happening. Once the problem is solved, no one thinks to write up a postmortem, and even if they do, it only gets distributed to a few people and isn’t saved outside that email chain. Managers blame each other and point fingers at people instead of taking a level-headed approach to reviewing the process that led to the failure. In short: panic, thrash, and poor communication.

While monitoring apps like PagerDuty can do a good job of indicating there’s a problem, they’re weaker at the collaborative resolution and post-mortem process, and they’re designed just for engineers rather than for everyone, like Kintaba is. Egan says, “It’s kind of like comparing the difference between the warning lights on a piece of machinery and the big red emergency button on a factory floor. We’re the big red button . . . That also means you don’t have to rip out PagerDuty to use Kintaba,” since it can be the trigger that starts the Kintaba flow.

Still, Kintaba will have to prove that it’s so much better than a shared Google Doc, whether as an adequate replacement for monitoring solutions or as a necessary add-on, that companies should pay $12 per user per month for it. PagerDuty’s deeper technical focus helped it go public a year ago, though its stock has fallen about 60% since, to a market cap of $1.75 billion. Meanwhile, customers like Dropbox, Zoom, and Vodafone rely on its SMS incident alerts, while Kintaba’s integration with Slack might not be enough to rouse coders from their slumber when something catches fire.

If Kintaba can succeed in incident resolution with today’s launch, the four-person team sees adjacent markets in task prioritization, knowledge sharing, observability, and team collaboration, though those would pit it against some massive rivals. If it can’t, perhaps Slack or Microsoft Teams could be suitable soft landings for Kintaba, bringing more structured systems for dealing with major screwups to their communication platforms.

When asked why he wanted to build a legacy atop software that might seem a bit boring on the surface, Egan concluded that “Companies using Kintaba should be learning faster than their competitors . . . Everyone deserves to work within a culture that grows stronger through failure.”

Scaleway launches block storage

Cloud hosting company Scaleway is adding a new service today — block storage. Customers will be able to purchase additional storage, attach that volume to a cloud instance and use it for a database, for example.

Block storage works pretty much like plugging an external hard drive into your laptop. You get a ton of extra space that you can use with the apps on your server. Compared to object storage, it is particularly useful if you need to constantly read and write data to a database, for example.

Cloud servers usually come with local storage, but you might reach the limit even though you still have a lot of headroom when it comes to CPU and RAM. Additionally, if you delete your cloud instance, your block storage is still available and you can attach it to another server. You can manage your volumes in the admin interface, using the Scaleway API or standard devops tools, such as Terraform.

Scaleway is also working on a managed Kubernetes service to deploy containerized applications directly. That service is also going to take advantage of block storage.

As for specifications, you can create a volume of any size between 1GB and 1TB. Starting in April, you’ll also be able to scale up a volume without having to detach it from your instance. You can also create as many as 15 different volumes for one instance.

Every volume is replicated three times and you can create snapshots. Scaleway uses SSDs and can handle 5,000 input/output operations per second.

The company charges €0.08 per GB per month. For instance, a 50GB volume will cost you €4 per month. In the future, there will be a more expensive tier at €0.12 per GB per month with performance of 10,000 input/output operations per second.
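For a quick sanity check of that pricing, here is the arithmetic, with the per-GB rates taken straight from the article (the helper function is just for illustration):

```python
# Pricing check using the per-GB rates quoted above.
STANDARD_RATE = 0.08  # EUR per GB per month, 5,000 IOPS tier
PREMIUM_RATE = 0.12   # EUR per GB per month, planned 10,000 IOPS tier

def monthly_cost(size_gb, rate=STANDARD_RATE):
    return size_gb * rate

print(monthly_cost(50))                  # 4.0   -> a 50GB volume costs 4 euros per month
print(monthly_cost(1000, PREMIUM_RATE))  # 120.0 -> a 1TB volume on the faster tier
```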

Scaleway says that its solution is both cheaper and more efficient than what Google and Amazon offer.

Deepnote raises $3.8M to build a better data science platform

Deepnote, a startup that offers data scientists an IDE-like collaborative online experience for building their machine learning models, today announced that it has raised a $3.8 million seed round led by Index Ventures and Accel, with participation from YC and Credo Ventures, as well as a number of angel investors, including OpenAI’s Greg Brockman, Figma’s Dylan Field, Elad Gil, Naval Ravikant, Daniel Gross and Lachy Groom.

Built around standard Jupyter notebooks, Deepnote wants to provide data scientists with a cloud-based platform that allows them to focus on their work by abstracting away all of the infrastructure. So instead of having to spend a few hours setting up their environment, a student in a data science class, for example, can simply come to Deepnote and get started.

In its current form, Deepnote doesn’t charge for its service, despite the fact that it allows its users to work with large data sets and train their models on cloud-based machines with attached GPUs.

As Deepnote co-founder and CEO (and ex-Mozilla engineer) Jakub Jurových told me, though, he believes that the most important feature of the service is its ability to allow users to collaborate. “Over the past couple of years, I started to do a lot of data science work and helped a couple of companies scale up their data science teams,” he said. “And again and again, we run into the same issue: people have real trouble collaborating.”

Jurových argues that while it’s easy enough to keep two or three data scientists in sync, once you have a bigger team, you quickly run into issues because the current set of tools was never meant to do this kind of work. “If I’m a data scientist by training, I spend most of my time doing math and stats,” he said. “But then, expecting me to connect to an EC2 cluster and spin a bunch of GPU instances for parallel training is just not something I’m looking for.”

When it started this project in early 2019, the Deepnote team decided to put Jupyter notebooks at the core of the user experience. That is, after all, what most data scientists are already familiar with. It then built the collaborative features around that, as well as tools for pulling in data from third-party services and scheduling tools for kicking off jobs inside of the platform at regular intervals.

Deepnote is already quite popular with students. Jurových also noted that a lot of teachers already use Deepnote to publish interactive exercises for their students. Over time, the company obviously wants to bring more businesses on board, but for the time being, it is mostly focused on building its product. Given its collaborative nature, the team also believes that the service will naturally grow through word of mouth as people invite others to collaborate on products.

“Data science is overdue for the benefits of tools that are cloud and collaboration native,” said Accel partner Vas Natarajan. “This is a fast-growing, dynamic market that’s demanding a successor to incumbent tools. Jakub and his team are building powerful software to modernize data science workflow for teams.”

The new funding will mostly go into hiring and building out the product, with a focus on the overall user experience. Even within the data science community, there are a variety of use cases, after all, and an NLP engineer has different needs from a computer vision engineer.

Datree announces $8M Series A as it joins Y Combinator

Datree, the early stage startup building a DevOps policy engine on GitHub, announced an $8 million Series A today. It also announced it has joined the Y Combinator Winter 20 cohort.

Blumberg and TLV Partners led the round, with participation from Y Combinator. The company has now raised $11 million, including the $3 million seed round announced in 2018.

Since that seed round, says company co-founder and CEO Shimon Tolts, Datree has learned that while scanning code for issues was something DevOps teams found useful, they also wanted help defining the rules. So Datree has created a series of rules packages you can run against the code to find any gaps or issues.

“We offer development best practices, coding standards and security and compliance policies. What happens today is that, as you connect to Datree, we connect to your source code and scan the entire code base, and we recommend development best practices based on your technology stack,” Tolts explained.

He says that they build these rules packages based on the company’s own expertise, as well as getting help from the community, and in some cases partnering with experts. For instance, for its Docker security package, it teamed up with Aqua Security.

The focus remains on applying these rules in GitHub where developers are working. Before committing the code, they can run the appropriate rules packages against it to ensure they are in compliance with best practices.

Datree rules packages. Screenshot: Datree
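To give a flavor of what such a rule might check, here is a generic sketch, not Datree’s actual rule format, of a tiny policy that flags two common Dockerfile issues before code is committed: an unpinned base image and a container that runs as root.

```python
# Generic illustration of the kind of check a rules package encodes;
# this is not Datree's rule format.
def check_dockerfile(dockerfile_text: str) -> list:
    violations = []
    lines = [line.strip() for line in dockerfile_text.splitlines()]
    for line in lines:
        if line.upper().startswith("FROM ") and (":" not in line or line.endswith(":latest")):
            violations.append("Base image is not pinned to a specific tag")
    if not any(line.upper().startswith("USER ") for line in lines):
        violations.append("No USER instruction: container will run as root")
    return violations

print(check_dockerfile('FROM python:latest\nCOPY . /app\nCMD ["python", "/app/main.py"]'))
# -> ['Base image is not pinned to a specific tag', 'No USER instruction: container will run as root']
```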

Tolts says they began looking at Y Combinator after the seed round because they wanted more guidance on building out the business. “We knew that Y Combinator could really help us because our product is relevant to 95% of all YC companies, and the program has helped us go and work on six figure deals with more mature YC companies,” he said.

Datree is working directly with Y Combinator CEO Michael Seibel, and Tolts says being part of the Winter 20 cohort has helped him refine his go-to-market motion. He admits Datree is not a typical YC company, having been around since 2017 with an existing product and 12 employees, but he thinks the program will help propel the company in the long run.