Forerunner is software for NFIMBYs, or no flooding in my backyard

Mayors have the toughest job in the world, and leading a city is only getting harder. Even as populations swell in urban cores across the world, climate change is constraining the geographies where that growth can happen. Coastal communities that are popular with residents are also gambling on rising sea levels. How do you trade off the need for growth against the requirement to protect residents from disaster?

In most cases, the pendulum has swung fully toward growth. Coastal towns continue to allow widespread sprawl and development, chasing ever more property taxes and residents even as sea levels rise uncomfortably high. It’s a recipe for disaster — and one that many cities have chosen to bake anyway.

Forerunner wants that pendulum to swing the other way. Its platform allows city planners and building managers to survey, investigate and enforce stricter building codes and land-use standards with a focus on mitigating future flood damage. It’s particularly focused on American cities that lean heavily on the federal flood insurance program, helping them maximize their adherence to that program’s byzantine rules.

The company pulls in data from FEMA and other sources to determine a property’s mandatory lowest floor height requirement, and whether the property conforms to that rule. It also tracks flood zone boundaries and helps with the administrative overhead of processing federal flood insurance documentation, such as creating and managing elevation certificates.
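
The core of that compliance check is simple to sketch. Here is a minimal, hypothetical Python illustration (not Forerunner’s actual code) of comparing a property’s lowest floor elevation against FEMA’s base flood elevation (BFE) plus a locally adopted “freeboard” margin; all names and numbers are invented:

    # Hypothetical sketch of a lowest-floor compliance check, not
    # Forerunner's actual implementation. Elevations are in feet above
    # the same vertical datum.
    from dataclasses import dataclass

    @dataclass
    class Parcel:
        address: str
        lowest_floor_ft: float  # surveyed lowest floor elevation
        bfe_ft: float           # FEMA base flood elevation for the parcel

    def is_compliant(parcel: Parcel, freeboard_ft: float = 1.0) -> bool:
        # Many communities require the lowest floor to sit at or above
        # BFE plus a locally adopted freeboard margin.
        return parcel.lowest_floor_ft >= parcel.bfe_ft + freeboard_ft

    home = Parcel("12 Shore Rd", lowest_floor_ft=9.5, bfe_ft=9.0)
    print(is_compliant(home))  # False: 9.5 ft is below 9.0 + 1.0 ft

Layer in shifting flood zone boundaries and elevation certificates for every parcel in a county, and you get a sense of the bookkeeping the product automates.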

Co-founders JT White and Susanna Pho have been friends for years and worked at the MIT Media Lab before eventually coming together in early 2019 to build out this floodplain management product. “It cannot be underscored enough that a lot of communities just don’t follow [federal flood] regulations,” Pho said. “They will revert their ordinances from something more strict … since they can’t do a lot of day-to-day compliance.”

Coastal cities devastated by floods are protected by federal flood insurance, but that often creates a moral hazard: since damage is paid for, there isn’t much incentive to avoid it in the first place. The federal government is attempting to tighten those standards, and there is also a sense among a new generation of city planners and municipal leaders that the build-devastation-rebuild model of many cities needs to stop given climate change. After flooding, “we want to see communities rebuild to higher standards,” White said. “The sort of cycle of rebuilding and doing the same thing over and over again is infuriating to us.”

Transitioning to a new model isn’t easy, of course. “There are a lot of hard decisions that these communities must make,” he said, but “our software makes it a bit easier to do these things.” So far, the company has gotten early traction, with 33 communities currently using Forerunner, according to the founders.

Although it has customer clusters in Louisiana and northern New Jersey, the company’s largest customer is Harris County, which includes much of the Houston, Texas metro area. The county could potentially save $5 million on its flood insurance premiums with better adherence to federal standards, according to White. “One of the benefits of our product is that we can help you protect and increase this immediate discount to every flood insurance policyholder in your community starting next year,” he said. Ultimately though, FEMA focuses on disincentives rather than incentives. “The biggest stick that FEMA has is that it can suspend communities from the flood insurance program,” he noted.

The company raised an early seed round in 2019, and has been focused on building up the platform’s capabilities and getting the sales flywheel spinning — which can be a tough order in the govtech space.

Even as demand intensifies for more housing and growth, climate change is simultaneously placing its own demands on cities. Mayors and city leaders are increasingly going to have to transition from the growth models of the past to the resilient models of the future.

BreezoMeter, which powers air quality in Apple’s Weather app, launches Wildfire Tracker

BreezoMeter has been on a mission to make information about environmental health hazards accessible to as many people as possible. Through its air quality index (AQI) calculations, the Israel-based company can now report the quality of air down to a resolution of a few meters in dozens of countries. A partnership with Apple to include its data in the iOS Weather app, along with its own popular apps, delivers those metrics to hundreds of millions of users, and an API product allows companies to tap into its dataset for their own purposes.

Right on the heels of a $30 million Series C round a few weeks ago, the company is expanding its product beyond air quality and into the real-time detection of wildfire perimeters with its new offering, Wildfire Tracker.

The new product takes advantage of the company’s fusion of sensor data, satellite imagery and local eyewitness reports to identify the edges of wildfires in real time. “People expect accurate wildfire information just as they expect accurate weather or humidity data,” Ran Korber, CEO and co-founder, said. “It has an immediate effect on their life.” He added that BreezoMeter wants to “try to connect the dots between climate tech and human health.”

Fire danger zones will be indicated with polygonal boundaries marked in red, and as always, air quality data will be viewable in these zones and in surrounding areas.
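
Representing and querying those polygons is well-trodden ground in geospatial software. As a rough illustration of the underlying idea (this is not BreezoMeter’s code, and the coordinates are invented), the open-source Shapely library can test whether a location falls inside a fire perimeter or a warning buffer around it:

    # Illustrative only: a fire perimeter as a polygon of (lon, lat)
    # vertices, queried with the open-source Shapely library.
    from shapely.geometry import Point, Polygon

    perimeter = Polygon([
        (-120.51, 38.20), (-120.48, 38.23),
        (-120.44, 38.21), (-120.47, 38.17),
    ])

    here = Point(-120.47, 38.30)

    print(perimeter.contains(here))   # inside the burning zone?
    # A crude warning buffer of ~0.05 degrees around the perimeter; a real
    # system would reproject to meters before buffering.
    print(perimeter.buffer(0.05).contains(here))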

BreezoMeter’s air quality maps can show the spread of wildfire pollution easily. Image Credits: BreezoMeter.

Korber emphasized that getting these perimeters accurate across dozens of countries was no easy feat. Sensors can be sparse, particularly in the forests where wildfires ignite. Meanwhile, satellite data that focuses on thermal imaging can be fooled. “We’re looking for abnormalities … many of the times you have these false positives,” Korber said. He gave the example of a large solar panel array, which can look very hot to thermal sensors but obviously isn’t a fire.

The identified fire perimeters will be available for free to consumers on BreezoMeter’s air quality map website, and will shortly come to the company’s apps as well. Later this year, these perimeters will be available from the company’s APIs for commercial customers. Korber hopes the API endpoints will give companies like car manufacturers the ability to forewarn drivers that they are approaching a conflagration.

The new feature is just a continuation of BreezoMeter’s long-running expansion of its product. “When we started, it was just air quality … and only forecasting air pollution in Israel,” Korber said. “Almost every year since then, we expanded the product portfolio to new environmental hazards.” He pointed to the addition of pollen in 2018 and the increasingly global nature of the app.

Wildfire detection is an, ahem, hot area these days for VC investors. For example, Cornea is a startup focused on helping firefighters identify and mitigate blazes, while Perimeter wants to help identify boundaries of wildfires and give explicit evacuation instructions complete with maps. As Silicon Valley’s home state of California and much of the world increasingly become a tinderbox for fires, expect more investment and products to enter this area.

FloodMapp wants to predict where water goes before it washes away your home

Floods are devastating. They rip communities asunder, wipe out neighborhoods and force the evacuation of thousands of people every year, and recovery can take years — assuming it’s possible at all. The U.S. government estimates that floods in recent decades (exclusive of hurricanes and tropical storms) have caused roughly $160 billion in damage and killed hundreds of people.

One would think that we should have a real-time model for where water is and where it is going around the world, what with all of those sensors on the ground and satellites in orbit. But we mostly don’t, instead relying on antiquated models that fail to take into account the possibilities of big data and big compute.

FloodMapp, a Brisbane, Australia-based startup, is aiming to wash out the old approaches to hydrology and predictive analytics and put in place a much more modern approach to help emergency managers and citizens know when the floods are coming — and what to do.

CEO and co-founder Juliette Murphy has spent a lifetime in the water resources engineering field, and saw firsthand the heavy destruction that water can cause. In 2011, she watched as her friend’s home was submerged in the midst of terrible flooding. The “water went right over the peak of her house,” she said. Two years later in Calgary, she saw the same situation again: floods and fear as friends tried to determine whether and how to evacuate.

Those memories and her own professional career led her to think more about how to build better tools for disaster managers. She ultimately synced up with CTO and co-founder Ryan Prosser to build FloodMapp in 2018, raising $1.3 million AUD along with a matching grant.

The company’s premise is simple: we have the tools to build real-time flood models today, but we have simply chosen not to take advantage of them. Water follows gravity, which means that if you know the topography of a place, you can predict where the water will flow. The challenge has been that solving the underlying differential equations at high resolution remains computationally expensive.
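
To make that concrete: classical hydraulic flood models typically discretize some form of the one-dimensional Saint-Venant (shallow water) equations,

    \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q, \qquad
    \frac{\partial Q}{\partial t} + \frac{\partial}{\partial x}\left(\frac{Q^{2}}{A}\right) + gA\,\frac{\partial h}{\partial x} + gA\,S_f = 0,

where A is the flow cross-section, Q the discharge, q the lateral inflow, h the water surface elevation, S_f the friction slope and g gravity. Solving that system over millions of grid cells, time step after time step, is what makes the traditional approach so slow.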

Murphy and Prosser decided to eschew the traditional physics-based approach that has dominated hydrology for decades in favor of a purely data-driven approach, one that takes advantage of widely available machine learning techniques to make those calculations far more tractable. “We do top down what used to be bottoms up,” Murphy said. “We have really sort of broken the speed barrier.” That work led to the creation of DASH, the startup’s real-time flood model.
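
A toy version of that data-driven idea (emphatically not FloodMapp’s DASH model) learns a direct mapping from upstream observations to a downstream water level, skipping the physics solve entirely:

    # Toy data-driven flood surrogate (not FloodMapp's DASH): learn a direct
    # mapping from upstream conditions to a downstream river level, skipping
    # the physics solve entirely. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    # Features: [upstream_gauge_m, rainfall_mm, tide_m]
    X = rng.uniform([0.0, 0.0, -1.0], [5.0, 120.0, 1.0], size=(2000, 3))
    # Pretend "truth": downstream level responds nonlinearly to the inputs.
    y = 0.6 * X[:, 0] + 0.01 * X[:, 1] ** 1.2 + 0.3 * X[:, 2]

    model = GradientBoostingRegressor().fit(X, y)
    print(model.predict([[3.2, 80.0, 0.4]]))   # forecast level in meters

A trained surrogate like this answers in milliseconds where a physics solver grinds for hours, which is the speed barrier Murphy is describing.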

FloodMapp’s modeling of the river flooding in Brisbane. Image Credits: FloodMapp

Unlike typical tech startups though, FloodMapp isn’t looking to be its own independent platform. Instead, it interoperates with existing geographic information systems (GIS) like Esri’s ArcGIS by offering a data layer that can be combined with other data streams to provide situational awareness to emergency response and recovery personnel. Customers pay a subscription fee for access to FloodMapp’s data layer, and so far, the company is working with the Queensland Fire and Emergency Services in Australia as well as the cities of Norfolk and Virginia Beach in Virginia.

But it’s not just emergency services the startup is ultimately hoping to attract. Any company with physical assets, from telcos and power companies to banks and retail chains with physical stores could potentially be a customer of the product. In fact, FloodMapp is betting that the SEC will mandate further climate change financial disclosures, which could lead to a … flood of new business (I get one flood pun, okay, I get one).

FloodMapp’s team has expanded from its original two founders to a whole crop of engineering and sales personnel. Image Credits: FloodMapp

Murphy notes that “we are still in our early stages” and that the company is likely to raise further financing early next year as it gets through this year’s flood season and onboards several new customers. She hopes that ultimately, FloodMapp will “not only help people, but help our country change and adapt in the face of a changing climate.”

FEMA just tested the US national emergency alert system

FEMA tested its national emergency alert system this week. Image Credits: Getty Images

Did you hear it? FEMA just ran its first nationwide test of the U.S. emergency alert system since the pandemic.

The Federal Emergency Management Agency, or FEMA, tested both the Emergency Alert System (EAS), which broadcasts an emergency tone and message on televisions and radios, and the Wireless Emergency Alerts (WEA), a newer system that sends emergency notifications to smartphones. This was the second nationwide test of the WEA since the first in 2018, and the first to reach all U.S. cell phones whose users chose to opt in to receive test alerts.

The test began around 2:20 p.m. ET. If you opted in to the test, you likely got a message on your phone that said: “THIS IS A TEST of the National Wireless Emergency Alert System. No action is needed.” (The FCC explains how to opt in to test alerts.)

For the first time, the WEA test sent the same test message in Spanish to phones that have Spanish set as the default language.

This is what the test WEA emergency alert looks like. Image Credits: WA Emergency Management

Since the last nationwide test in 2019, FEMA said it has improved WEA to send longer, detailed messages to the majority of phones that support it. The update also allows authorities to include tappable links, like web addresses.

FEMA runs these tests every year or two to ensure the system is working properly. It’s no small task: A national emergency alert system designed to broadcast the same message to potentially hundreds of millions of people at the same time is fraught with technological hurdles and requires close cooperation from the cell carriers and broadcast networks.

The EAS has been around since the late 1990s, but WEA was developed more recently as Americans came to rely on their phones. WEA alerts, like EAS alerts, are designed to be sent by local and state authorities for public safety alerts, missing children and imminent threats, such as severe weather. More recently, FEMA rolled out “presidential alerts,” which are supposed to be sent to every phone in the U.S. in the event of a national emergency. Presidential alerts, unlike other alerts, can be issued by the sitting president for any reason, and Americans cannot opt out.

WEA broadcasts emergency notifications through the cell towers of an affected area — such as an area about to be hit by a storm — rather than sending tens of millions of text messages, which would grind the cell networks to a halt. The alerts are created by local, state or federal authorities and are authenticated by FEMA through the Integrated Public Alert & Warning System, or IPAWS, and then passed to cell carriers to deliver the emergency alert.
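
Under the hood, those alerts travel as structured XML in the Common Alerting Protocol (CAP), the OASIS standard that IPAWS ingests. A minimal, illustrative CAP 1.2-style test message, with invented field values, built with Python’s standard library, looks something like this:

    # Minimal sketch of a CAP 1.2-style alert, the XML format that alert
    # originators submit to FEMA's IPAWS. All values here are illustrative.
    import xml.etree.ElementTree as ET

    NS = "urn:oasis:names:tc:emergency:cap:1.2"

    def add(parent, tag, text):
        el = ET.SubElement(parent, f"{{{NS}}}{tag}")
        el.text = text
        return el

    alert = ET.Element(f"{{{NS}}}alert")
    add(alert, "identifier", "TEST-2021-0001")          # unique message ID
    add(alert, "sender", "ipaws@fema.example")          # hypothetical sender
    add(alert, "sent", "2021-08-11T14:20:00-04:00")
    add(alert, "status", "Test")                        # not a live alert
    add(alert, "msgType", "Alert")
    add(alert, "scope", "Public")

    info = ET.SubElement(alert, f"{{{NS}}}info")
    add(info, "category", "Safety")
    add(info, "event", "National Periodic Test")
    add(info, "urgency", "Unknown")
    add(info, "severity", "Minor")
    add(info, "certainty", "Unknown")
    add(info, "headline", "THIS IS A TEST of the National Wireless "
                          "Emergency Alert System. No action is needed.")

    print(ET.tostring(alert, encoding="unicode"))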

The emergency alert system, though, is far from perfect. In 2018, an erroneous alert sent to Hawaii residents warned of an imminent ballistic missile threat and that “this is not a drill.” Minutes later, the alert was canceled. The false warning came as tensions between the U.S. and North Korea were at an all-time high, during which Pyongyang was regularly test-firing rockets used for its nuclear weapons program.

Security experts have also long warned that EAS poses security risks. Last year, researchers found dozens of internet-connected, special-purpose servers that television and radio stations use to interrupt their broadcasts and relay emergency alerts, a level of exposure they said could allow a hacker to break in and compromise the servers.

The human-focused startups of the hellfire

Disasters may not always be man-made, but they are always responded to by humans. There’s a whole panoply of skills and professions required today to respond to even the tiniest emergency, and that doesn’t even include the needs of pre-disaster planning and post-disaster recovery. It’s not a very remunerative industry for most, and the mental health effects of the stress can linger for decades, but the mission at the core of this work — to help people in their time of greatest need — is what continues to attract so many to this never-ending battle anyway.

In the last three parts of this series on the future of technology and disaster response, I’ve focused on, well, technology, and specifically the sales cycle for new products, the sudden data deluge now that the Internet of Things (IoT) is in full force, and the connectivity that allows that data to radiate all around. What we haven’t looked at enough so far is the human element: the people who actually respond to disasters, the challenges they face and how technology can help them.

So in this fourth and final part of the series, we’ll look at four areas where humans and technology intersect within disaster response and what future opportunities lie in this market: training and development, mental health, crowdsourced responses to disasters, and our doomsday future of hyper-complex emergencies.

Training in a hellfire

Most fields have linear approaches to training. To become a software engineer, students learn some computer science theory, add in some programming practice, and voilà (note: your mileage may vary). To become a medical doctor, aspiring physicians take an undergraduate curriculum teeming with biology and chemistry, head to medical school for two deadening years of core anatomy and other classes, and then switch into clinical rotations, a residency and maybe fellowships.

But how do you train someone to respond to emergencies?

From 911 call takers to EMTs and paramedics to emergency planning officials and the on-the-ground responders operating in the center of the storm, as it were, there is wide variation in the skills required to do these jobs well. What’s necessary aren’t just specific hard skills like using call dispatch software or knowing how to upload video from a disaster site, but critically important softer skills as well: precise communication, sangfroid, agility, and balancing improvisation with consistency. The chaos element also can’t be overstated: every disaster is different, and these skills must be viscerally recombined and exercised under extreme pressure with frequently sparse information.

A whole range of what might be dubbed “edtech” products could serve these needs, and not just exclusively for emergency management.

Communications, for instance, isn’t just about team communications, but also communicating with many different constituencies. Aaron Clark-Ginsberg, a social scientist at RAND Corporation, said that “a lot of these skills are social skills — being able to work with different groups of people in culturally and socially appropriate ways.” He notes that the field of emergency management has heightened attention to these issues in recent years, and “the skillset we need is to work with those community structures” that already exist where a disaster strikes.

As we’ve seen in the tech industry over the last few years, cross-cultural communication skills remain scarce. One can always learn them through repeated experience, but could we train people to develop empathy and understanding through software? Can we develop better and richer scenarios to train emergency responders — and all of us, really — on how to communicate effectively in widely diverging conditions? That’s a huge opportunity for a startup to tackle.

Emergency management is now a well-developed career path. “The history of the field is very fascinating, [it’s] been increasingly professionalized, with all these certifications,” Clark-Ginsberg said. That professionalization “standardizes emergency response so that you know what you are getting since they have all these certs, and you know what they know and what they don’t.” Certifications can indicate competence in individual skills, but they don’t amount to a holistic assessment, and that’s a market that offers opportunities for new startups to create better assessments.

Like many of us, responders get used to doing the same thing over and over again, and that can make training for new skills even more challenging. Michael Martin of emergency data management platform RapidSOS describes how 911 call takers come to rely on muscle memory, “so switching to a new system is very high-risk.” No matter how bad existing software interfaces are, changing them will very likely slow every single response down while increasing the risk of errors. That’s why the company offers “25,000 hours a year for training, support, integration.” There remains a huge and relatively fragmented market for training staff as well as transitioning them from one software stack to another.

Outside these somewhat narrow niches, there is a need for a massive renaissance in training across this whole area. My colleague Natasha Mascarenhas recently wrote an EC-1 on Duolingo, an app designed to gamify and entrance students interested in learning second languages. It’s a compelling product, and there is no comparable training system for engaging the full gamut of first responders.

Art delaCruz, COO and president of Team Rubicon, a non-profit which assembles teams of volunteer military veterans to respond to natural disasters, said that it’s an issue his organization is spending more time thinking about. “Part of resilience is education, and the ability to access information, and that is a gap that we continue to close on,” he said. “How do you present information that’s more simple than [a learning management system]?” He described the need for “knowledge bombs like flash cards” to regularly provide responders with new knowledge while testing existing ideas.

There’s also a need to scale up best practices rapidly across the world. Tom Cotter, director of emergency response and preparedness at Project Hope, a non-profit which empowers local healthcare workers in disaster-stricken and impoverished areas, said that in the context of COVID-19, “a lot of what was going to be needed [early on] was training — there were huge information gaps at the clinical level, how to communicate it at a community level.” The organization developed a curriculum with Brown University’s Watson Institute in the form of interactive PowerPoints that were ultimately used to train 100,000 healthcare workers on the new virus, according to Cotter.

When I look at the spectrum of edtech products existing today, one of the key peculiarities is just how narrowly each is focused. There are apps for learning languages, learning math and developing literacy. There are flash card apps like Anki that are popular among medical students, and more interactive approaches like Labster for science experiments and Sketchy for learning anatomy.

Yet, for all the talk of boot camps in Silicon Valley, there is no edtech company that tries to completely transform a student in the way that a bona fide boot camp does. No startup wants to holistically develop its students, adding in hard skills while also advancing the ability to handle stress, the improvisation needed to confront rapidly changing environments, and the skills needed to communicate with empathy.

Maybe that can’t be done with software. Maybe. Or perhaps no founder has yet had the ambition to go for broke — to really revolutionize how we think about training the next generation of emergency management professionals, along with everyone else in private industry who needs to handle stress or think on their feet just as much as frontline workers do.

That’s the direction Bryce Stirton, president and co-founder of public-safety company Responder Corp, has been thinking in. “Another area I am personally a fan of is the training space around VR,” he said. “It’s very difficult to synthesize these stressful environments” in areas like firefighting, but new technologies have “the ability to pump the heart that you need to experience in training.” He concluded that “the VR world, it can have a large impact.”

Healing after disaster

When it comes to trauma, few fields face quite the challenge that emergency response does. It’s work that almost by definition forces its personnel to confront some of the most harrowing scenes imaginable. Death and destruction are a given, but what’s not always accounted for is the lack of agency in some of these contexts for first responders — the family that can’t be saved in time, so a 911 call taker has to offer final solace, or the paramedics who don’t have the right equipment even as they show up on site.

Post-traumatic stress is perhaps the most well-known and common mental health condition facing first responders, although it is hardly the only one. How to ameliorate and potentially even cure these conditions represents a burgeoning area of investment and growth for a number of startups and investors.

Risk & Return, for instance, is a venture firm heavily focused on companies working on mental health as well as human performance more generally. In my profile of the firm a few weeks ago, managing director Jeff Eggers said that “We love that type of technology since it has that dual purpose: going to serve the first responder on the ground, but the community is also going to benefit.”

Two companies from its portfolio are useful to explore as examples of different pathways in this category. The first is Alto Neuroscience, a stealthy startup founded by Amit Etkin, a multidisciplinary neuroscientist and psychiatrist at Stanford, to create new clinical treatments for post-traumatic stress and other conditions based on brainwave data. Given its therapeutic focus, it’s probably years before testing and regulatory approvals come through, but this sort of research is on the cutting edge of innovation here.

The second company is NeuroFlow, a software startup using apps to guide patients to better mental health outcomes. Through persistent polling, testing and collaboration with practitioners, the company’s tools allow for more active monitoring of mental health — looking for emerging symptoms or relapses in even the most complicated cases. NeuroFlow is more on the clinical side, but there is obviously a wealth of wellness startups that have percolated in recent years as well, like Headspace and Calm.

Outside of therapeutics and software though, there are entirely new frontiers around mental health in areas like psychedelics. That was one of the trends I called out as a top five area for investment in the 2020s earlier this year, and I stand by that. We’ve also covered a startup called Osmind which is a clinical platform for managing patients with a psychedelic focus.

Risk & Return itself hasn’t made an investment in psychedelics yet, but Bob Kerrey, the firm’s board chairman and the former co-chair of the 9/11 Commission as well as former governor and senator of Nebraska, said that “it’s difficult to do this if you are the government, but easier to do this in the private sector.”

Similar to edtech, mental health startups might get their start in the first responder community, but they are hardly limited to this population. Post-traumatic stress and other mental health conditions affect wide swaths of the world’s population, and solutions that work in one community can often translate more broadly to others. It’s a massive, massive market, and one that could potentially transform the lives of millions of people for the better.

Before moving on, there’s one other area of interest here, and that is creating impactful communities for healing. First responders and military veterans experience a mission and camaraderie in their service that they often lack once they are in new jobs or on convalescence. DelaCruz of Team Rubicon says that one of the goals of bringing veterans to help in disaster regions is that the veterans themselves “reconnect with identity and community — we have these incredible assets in these men and women who have served.” It’s not enough to just find a single treatment per patient — we oftentimes need to zoom out to the wider population to see how mental health ripples out.

Helping people find purpose may not be the easiest challenge to solve as a startup, but it’s certainly a major challenge for many, and an area fermenting with new approaches now that the social networking wave has reached its nadir.

Crowdsourcing disaster response

Decentralization has been all the rage in tech in recent years — just mention the word blockchain in a TechCrunch article to get at least 50 PR emails about the latest NFT for a toilet stain. While there is obviously a lot of noise, one area where substance may pan out well is in disaster response.

If the COVID-19 pandemic showed anything, it was the power of the internet to aggregate and verify data, build dashboards, and deliver highly effective visualizations of complex information for professionals and laypeople alike. Those products were developed by people all around the world, often from the comfort of their own homes, and they demonstrate how crowds can quickly marshal serious labor to help respond to crises as they crop up.

Jonathan Sury, project director at the National Center for Disaster Preparedness at the Earth Institute at Columbia University, said that “COVID has really blown so much of what we think about out of the water.” With so many ways to collaborate online right now, “that’s what I would say is very exciting … and also practical and empowering.”

Clark-Ginsberg of RAND calls it the “next frontier of disaster management.” He argues that “if you can use technology to broaden the number of people who can participate in disaster management and respond to disasters,” then we might be reaching an entirely new paradigm for what effective disaster response will look like. “Formal structures [for professional frontline workers] have strengthened and that has saved lives and resources, but our ability to engage with everyday responders is still something to work on.”

Many of the tools that underpin these crowdsourced efforts don’t even focus on disasters. Sury pointed to Tableau and data visualization platform Flourish as examples of the kinds of tools that remote, lay first responders are using. There are now quite robust tools for tabular data, but we’re still relatively early in the development of tools for handling mapping data — obviously critical in the crisis context. Unfolded.ai, which I profiled earlier this year, is working on building scalable geospatial analytics in the browser. A lot more can be done here.

Oftentimes there are ways to coordinate the coordinators. Develop for Good, which I looked at late last year, is a non-profit designed to connect enterprising computer science students to software and data projects at non-profits and agencies that needed help during the pandemic. Sometimes these coordinators are non-profit orgs, and sometimes, just very active Twitter accounts. There’s a lot more experimentation possible on how to coordinate efforts in a decentralized way while still engaging with professional first responders and the public sector.

Speaking of decentralization, it’s even possible that blockchain could play a role in disaster and crisis response. Many of these opportunities rest on using blockchain for evidence collection or for identity. For example, earlier this week Leigh Cuen took a careful look at an at-home sexual assault evidence collection kit from Leda Health that uses the blockchain to establish a clear time for when a sample was collected.

There is a lot more potential to harness the power of crowdsourcing and decentralization, and many of these projects have applications far outside disaster management itself. These tools not only solve real problems — they provide real community to people who may not be affected by the disaster themselves, but are enthusiastic to do their part to help others.

The black swans of black swans

In terms of startups, the three markets I identified — better training, better mental health, and better crowdsourcing collaboration tools, particularly around data — collectively represent a very compelling set of markets that will not only be valuable for founders, but can rapidly improve lives.

In his book Normal Accidents, Charles Perrow describes how the increasing complexity and tight coupling of our modern technical systems all but guarantee that disasters will occur. Add in a warming world, as well as the intensity, frequency and just plain unusualness of the disasters arriving each year, and we are increasingly seeing entirely novel forms of emergencies we have never responded to before. Take, most recently, the ultra-frigid conditions in Texas that sapped power from its grid, leading to blackouts that lasted hours, and in some parts of the state, days.

Clark-Ginsberg said, “We are seeing these risks emerge that aren’t just typical wildfires — where we have a response structure that we can easily setup and manage the hazard, [we’re] very good at managing these typical disasters. There are more of these atypical disasters cropping up, and we have a very hard time setting up structures for this — the pandemic is a great example of that.”

He describes these challenges as “trans-boundary risk management,” disasters that cross bureaucratic lines, professions, societies, and means of action. “It takes a certain agility and the ability to move quickly and the ability to work in ways outside typical bureaucratic structures, and that is just challenging full stop,” he said.


Even as we begin to have better point solutions to the individual problems that disasters and their responses present, we can’t neglect the more systemic challenges that these emergencies are bringing to the fore. We have to start thinking about bringing humans together faster and in more novel ways to be the most effective, while coupling them flexibly and with agility to the best tools that meet their needs in the moment. That’s probably not literally “a startup,” but more a way of thinking about what it means to construct a disaster response fresh, given the information available.

Amanda Levin, a policy analyst at the Natural Resources Defense Council, said that “even if we mitigate, there are huge pressures and huge impacts today from a warming world … even if we stop emissions today, [they] will still persist.” As one of my interviewees in government service, who asked to go unnamed, noted about disaster response, “You always are coming up short somewhere.” The problems are only getting harder, and we humans need much better tools to match the trials we have created for ourselves. That’s the challenge — and opportunity — for a tough century ahead.

When the Earth is gone, at least the internet will still be working

The internet is now our nervous system. We are constantly streaming and buying and watching and liking, our brains locked into the global information matrix as one universal and coruscating emanation of thought and emotion.

What happens when the machine stops though?

It’s a question that E.M. Forster was intensely focused on more than a century ago in a short story called, rightly enough, “The Machine Stops,” about a human civilization connected entirely through machines that one day just turn off.

Those fears of downtime are not just science fiction anymore. An outage no longer just means missing a must-watch TikTok clip. Hospitals, law enforcement, the government, every corporation — the entire spectrum of human institutions that constitute civilization now deeply relies on connectivity to function.

So when it comes to disaster response, the world has dramatically changed. In decades past, the singular focus could be roughly summarized as rescue and mitigation — save who you can while trying to limit the scale of destruction. Today though, the highest priority is by necessity internet access, not just for citizens, but increasingly for the on-the-ground first responders who need bandwidth to protect themselves, keep abreast of their mission objectives, and have real-time ground truth on where dangers lurk and where help is needed.

While the sales cycles might be arduous, as we learned in part one, and the data trickles have finally turned to streams, as we saw in part two, the reality is that none of that matters if there isn’t connectivity to begin with. So in part three of this series on the future of technology and disaster response, we’re going to analyze the changing nature of bandwidth and connectivity and how they intersect with emergencies. We’ll look at how telcos are building resilience into their networks while defending against climate change, how first responders are integrating connectivity into their operations, and how new technologies like 5G and satellite internet will affect these critical activities.

Wireless resilience as the world burns

Climate change is inducing more intense weather patterns all around the world, creating second- and third-order effects for industries that rely on environmental stability for their operations. Few industries have to be as responsive to this changing context as telecom companies, whose wired and wireless infrastructure is regularly buffeted by severe storms. Resiliency of these networks isn’t just needed for consumers — it’s absolutely necessary for the very responders trying to mitigate disasters and get the network back up in the first place.

Unsurprisingly, no issue looms larger for telcos than access to power — no juice, no bars. So all three of America’s major telcos — Verizon (which owns TechCrunch’s parent company Verizon Media, although not for much longer), AT&T and T-Mobile — have had to dramatically scale up their resiliency efforts in recent years to compensate both for the demand for wireless and the growing damage wrought by weather.

Jay Naillon, senior director of national technology service operations strategy at T-Mobile, said that the company has made resilience a key part of its network buildout in recent years, with investments in generators at cell towers that can be relied upon when the grid cannot. In “areas that have been hit by hurricanes or places that have fragile grids … that is where we have invested most of our fixed assets,” he said.

Like all three telcos, T-Mobile pre-deploys equipment in anticipation of disruptions. So when a hurricane begins to swirl in the Atlantic Ocean, the company will strategically fly in portable generators and mobile cell towers in anticipation of potential outages. “We look at storm forecasts for the year,” Naillon explained, and do “lots of preventative planning.” The company also works with emergency managers and will “run through various drills with them and respond and collaborate effectively with them” to determine which parts of the network are most at risk for damage in an emergency. Last year, the company partnered with StormGeo to accurately predict weather events.

Predictive AI for disasters is also a critical need for AT&T. Jason Porter, who leads public sector and the company’s FirstNet first-responder network, said that AT&T teamed up with Argonne National Laboratory to create a climate-change analysis tool to evaluate the siting of its cell towers and how they will weather the next 30 years of “floods, hurricanes, droughts and wildfires.” “We redesigned our buildout … based on what our algorithms told us would come,” he said, and the company has been elevating vulnerable cell towers by four to eight feet on “stilts” to improve their resiliency to at least some weather events. That, he said, “gave ourselves some additional buffer.”

AT&T has also had to manage the growing complexity of creating reliability amid the chaos of a climate-changed world. In recent years, “we quickly realized that many of our deployments were due to weather-related events,” Porter said, and the company has been “very focused on expanding our generator coverage over the past few years.” It has also invested heavily in portable infrastructure. “We essentially deploy entire data centers on trucks so that we can stand up essentially a central office,” he said, emphasizing that the company’s national disaster recovery team responded to thousands of events last year.

Particularly on its FirstNet service, AT&T has pioneered two new technologies to try to get bandwidth to disaster-hit regions faster. First, it has invested in drones to offer wireless services from the sky. After Hurricane Laura hit Louisiana last year with record-setting winds, our “cell towers were twisted up like recycled aluminum cans … so we needed to deploy a sustainable solution,” Porter said. So the company deployed what it dubs the FirstNet One — a “dirigible” that “can cover twice the cell coverage range of a cell tower on a truck, and it can stay up for literally weeks, refuel in less than an hour and go back up — so long-term, sustainable coverage,” he said.

AT&T’s FirstNet One dirigible to offer internet access from the air for first responders. Image Credits: AT&T/FirstNet

Second, the company has been building out what it calls FirstNet MegaRange — a set of high-powered wireless equipment announced earlier this year that can deploy signals from miles away, say from a ship moored off a coast, to deliver reliable connectivity to first responders in the hardest-hit disaster zones.

As the internet has absorbed more of daily life, the norms for network resilience have become ever more exacting. Small outages can disrupt not just a first responder, but a child taking virtual classes and a doctor conducting remote surgery. From fixed and portable generators to rapid-deployment mobile cell towers and dirigibles, telcos are investing major resources to keep their networks running continuously.

Yet, these initiatives are ultimately costs borne by telcos increasingly confronting a world burning up. Across conversations with all three telcos and others in the disaster response space, there was a general sense that utilities increasingly just have to insulate themselves in a climate-changed world. For instance, cell towers need their own generators because — as we saw with Texas earlier this year — even the power grid itself can’t be guaranteed to be there. Critical applications need to have offline capabilities, since internet outages can’t always be prevented. The machine runs, but the machine stops, too.

The trend lines on the frontlines are data lines

While we may rely on connectivity in our daily lives as consumers, disaster responders have been much more hesitant to fully transition to connected services. It is precisely in the middle of a tornado, when the cell tower is down, that you realize a printed map might have been nice to have. Paper, pens, compasses — the old staples of survival flicks remain just as important in the field today as they were decades ago.

Yet, the power of software and connectivity to improve emergency response has forced a rethinking of field communications and how deeply technology is integrated on the ground. Data from the frontlines is extremely useful, and if it can be transmitted, dramatically improves the ability of operations planners to respond safely and efficiently.

Both AT&T and Verizon have made large investments in directly servicing the unique needs of the first responder community, with AT&T in particular gaining prominence with its FirstNet network, which it operates exclusively through a public-private partnership with the Department of Commerce’s First Responder Network Authority. The government offered a special spectrum license in Band 14 to the FirstNet authority in exchange for the buildout of a responder-exclusive network, a key recommendation of the 9/11 Commission, which found that first responders couldn’t communicate with each other on the day of those deadly terrorist attacks. Now, Porter of AT&T says that the company’s buildout is “90% complete” and is approaching 3 million square miles of coverage.

Why so much attention on first responders? The telcos are investing here because in many ways, first responders are on the frontiers of technology. They need edge computing, rapid AI/ML decision-making, the bandwidth and latency of 5G (which we will get to in a bit) and high reliability — and, in general, they are fairly profitable customers to boot. In other words, what first responders need today is what consumers in general are going to want tomorrow.

Cory Davis, director of public safety strategy and crisis response at Verizon, explained that “more than ever, first responders are relying on technology to go out there and save lives.” His counterpart, Nick Nilan, who leads product management for the public sector, said that “when we became Verizon, it was really about voice [and] what’s changed over the last five [years] is the importance of data.” He brought attention to tools for situational awareness, mapping and more that are becoming standard in the field. Everything first responders do “comes back to the network — do you have the coverage where you need it, do you have the network access when something happens?”

The challenge for the telcos is that we all want access to that network when catastrophe strikes, which is precisely when network resources are most scarce. The first responder trying to communicate with their team on the ground or their operations center is inevitably competing with a citizen letting friends know they are safe — or perhaps just watching the latest episode of a TV show in their vehicle as they are fleeing the evacuation zone.

That competition is the argument for a completely segmented network like FirstNet, which has its own dedicated spectrum with devices that can only be used by first responders. “With remote learning, remote work and general congestion,” Porter said, telcos and other bandwidth providers were overwhelmed with consumer demand. “Thankfully we saw through FirstNet … clearing that 20 MHz of spectrum for first responders” helped keep the lines clear for high-priority communications.

FirstNet’s big emphasis is on its dedicated spectrum, but that’s just one component of a larger strategy to give first responders always-on and ready access to wireless services. AT&T and Verizon have made prioritization and preemption key operational components of their networks in recent years. Prioritization gives public safety users better access to the network, while preemption can include actively kicking off lower-priority consumers from the network to ensure first responders have immediate access.

Nilan of Verizon said, “The network is built for everybody … but once we start thinking about who absolutely needs access to the network at a period of time, we prioritize our first responders.” Verizon has prioritization, preemption and now virtual segmentation — “we separate their traffic from consumer traffic” so that first responders don’t have to compete if bandwidth is limited in the middle of a disaster. He noted that all three approaches have been enabled since 2018, and Verizon’s suite of bandwidth and software for first responders now comes under the newly christened Verizon Frontline brand, which launched in March.
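
The mechanics of preemption are easy to sketch in miniature. The toy model below (purely illustrative, and far simpler than real cellular QoS) admits sessions up to capacity and lets a higher-priority responder bump the lowest-priority active user:

    # Purely illustrative sketch of priority-with-preemption admission
    # control; real cellular QoS is far more involved.
    import heapq

    CAPACITY = 3
    RESPONDER, CONSUMER = 0, 1      # lower number = higher priority

    active = []  # min-heap of (-tier, user): lowest-priority user on top

    def admit(user, tier):
        if len(active) < CAPACITY:
            heapq.heappush(active, (-tier, user))
            return f"{user} admitted"
        neg_tier, worst_user = active[0]
        if tier < -neg_tier:        # outranks the worst active session
            heapq.heapreplace(active, (-tier, user))
            return f"{user} admitted, {worst_user} preempted"
        return f"{user} blocked"

    for user, tier in [("phone-1", CONSUMER), ("phone-2", CONSUMER),
                       ("phone-3", CONSUMER), ("engine-51", RESPONDER)]:
        print(admit(user, tier))    # engine-51 bumps a consumer session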

With increased bandwidth reliability, first responders are connected in ways that even a decade ago would have been unfathomable. Tablets, sensors, connected devices and tools — equipment that would once have been manual is now increasingly digital.

That opens up a wealth of possibilities now that the infrastructure is established. My interview subjects suggested applications as diverse as the decentralized coordination of response team movements through GPS and 5G; real-time maps that offer up-to-date risk analysis of how a disaster might progress; pathfinding for evacuees that’s updated as routes fluctuate; AI damage assessments even before the recovery process begins; and much, much more. In fact, many of those possibilities, which in the past were only marketing-speak and technical promises, may finally be realized in the coming years.

Five, Gee

We’ve been hearing about 5G for years now — and even 6G every once in a while, just to give reporters heart attacks — but what does 5G even mean in the context of disaster response? After years of speculation, we are finally starting to get answers.

Naillon of T-Mobile noted that the biggest benefit of 5G is that it “allows us to have greater coverage” particularly given the low-band spectrum that the standard partially uses. That said, “As far as applications — we are not really there at that point from an emergency response perspective,” he said.

Meanwhile, Porter of AT&T said that “the beauty of 5G that we have seen there is less about the speed and more about the latency.” Consumers have often seen marketing around voluminous bandwidth, but in the first-responder world, latency and edge computing tend to be the most desirable features. For instance, devices can relay video to each other on the frontlines without necessarily needing a backhaul to the main wireless network. On-board processing of image data could allow for rapid decision-making in environments where seconds can be vital to the success of a mission.
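
The pattern Porter is describing can be sketched in a few lines: run inference on-device and ship only tiny detections over the network, never the raw video. The “model” below is a stand-in brightness check, purely for illustration:

    # Illustrative edge-computing pattern: infer on-device, transmit only
    # compact detections instead of backhauling raw video.
    import numpy as np

    def detect_hazard(frame: np.ndarray) -> bool:
        # Stand-in for an on-board ML model: flag frames whose mean
        # brightness spikes. A real system would run a trained detector.
        return frame.mean() > 200

    def process_stream(frames):
        for i, frame in enumerate(frames):
            if detect_hazard(frame):                  # inference at the edge
                yield {"frame": i, "alert": "possible fire"}  # tiny payload

    frames = [np.full((480, 640), 30, np.uint8) for _ in range(9)]
    frames.append(np.full((480, 640), 255, np.uint8))  # simulated flare-up
    print(list(process_stream(frames)))   # one small alert; no video sent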

That flexibility is allowing for many new applications in disaster response, and “we are seeing some amazing use cases coming out of our 5G deployments [and] we have launched some of our pilots with the [Department of Defense],” Porter said. He offered an example of “robotic dogs to go and do bomb dismantling or inspecting and recovery.”

Verizon has made innovating on new applications a strategic goal, launching a 5G First Responders Lab dedicated to guiding a new generation of startups to build at this crossroads. Nilan of Verizon said that the incubator has had more than 20 companies across four different cohorts, working on everything from virtual reality training environments to AR applications that allow firefighters to “see through walls.” His colleague Davis said that “artificial intelligence is going to continue to get better and better and better.”

Blueforce is a company that went through the first cohort of the Lab. The company uses 5G to connect sensors and devices together to allow first responders to make the best decisions they can with the most up-to-date data. Michael Helfrich, founder and CEO, said that “because of these new networks … commanders are able to leave the vehicle and go into the field and get the same fidelity” of information that they normally would have to be in a command center to receive. He noted that in addition to classic user interfaces, the company is exploring other ways of presenting information to responders. “They don’t have to look at a screen anymore, and [we’re] exploring different cognitive models like audio, vibration and heads-up displays.”

5G will offer many new ways to improve emergency responses, but that doesn’t mean that our current 4G networks will just disappear. Davis said that many sensors in the field don’t need the kind of latency or bandwidth that 5G offers. “LTE is going to be around for many, many more years,” he said, pointing to the hardware and applications taking advantage of LTE-M standards for Internet of Things (IoT) devices as a key development for the future here.

Michael Martin of emergency response data platform RapidSOS said that “it does feel like there is renewed energy to solve real problems,” in the disaster response market, which he dubbed the “Elon Musk effect.” And that effect definitely does exist when it comes to connectivity, where SpaceX’s satellite bandwidth project Starlink comes into play.

Satellite uplinks have historically had horrific latency and bandwidth constraints, making them difficult to use in disaster contexts. Furthermore, depending on the particular type of disaster, satellite uplinks can be astonishingly challenging to set up given the ground environment. Starlink promises to shatter all of those barriers — easier connections, fat pipes, low latencies and a global footprint that would be the envy of any first responder. Its network is still under active development, so it is difficult to foresee today precisely what its impact will be on the disaster response market, but it’s an offering to watch closely in the years ahead, because if its promises pan out, it has the potential to completely upend the way we respond to disasters this century.

Yet, even if we discount Starlink, the change coming this decade in emergency response represents a complete revolution. The depth and resilience of connectivity is changing the equation for first responders from complete reliance on antiquated tools to an embrace of the future of digital computing. The machine is no longer stoppable.


Data was the new oil, until the oil caught fire

We’ve been hearing how “data is the new oil” for more than a decade now, and in certain sectors, it’s a maxim that has more than panned out. From marketing and logistics to finance and product, decision-making is now dominated by data at all levels of most big private orgs (and if it isn’t, I’d be getting a résumé put together, stat).

So it might be something of a surprise to learn that data, which could transform how we respond to the increasingly deadly disasters that regularly plague us, has been all but absent from much of emergency response this past decade. Far from tapping a geyser of digital oil, disaster response agencies and private organizations alike have for years tried to swell the scope and scale of the data flowing into disaster response, with relatively meager results.

That’s starting to change though, mostly thanks to the Internet of Things (IoT), and frontline crisis managers today increasingly have the data they need to make better decisions across the resilience, response and recovery cycle. The best is yet to come — with drones flying overhead, richer simulated visualizations and AI-driven disaster prediction, what we’re seeing today on the frontlines is only the beginning of what could be a revolution in disaster response in the 2020s.

The long-awaited disaster data deluge has finally arrived

Emergency response is a fight against the fog of war and the dreadful ticking of the clock. In the midst of a wildfire or hurricane, everything can change in a matter of seconds — even milliseconds if you aren’t paying attention. Safe roads ferrying evacuees can suddenly become impassable infernos, evacuation teams can reposition and find themselves spread far too thin, and unforeseen conditions can rapidly metastasize to cover the entire operating environment. An operations center that once had perfect information can quickly find it has no ground truth at all.

Unfortunately, even getting raw data on what’s happening before and during a disaster can be extraordinarily difficult. When we look at the data revolution in business, part of the early success stems from the fact that companies were always heavily reliant on data to handle their activities. Digitalization was and is the key word: moving from paper to computers in order to transform latent raw data into a form that was machine-readable and therefore analyzable. In business, the last ten years were basically an upgrade from version one to version two.

In emergency management, however, many agencies are stuck without a version at all. Take a flood — where is the water and where is it going? Until recently, there was no comprehensive data on where waters rose from and where they sloshed to. When it came to wildfires, there were no administrative datasets on where every tree in the world was located and how prone each was to fire. Even human infrastructure like power lines and cell towers often had little interface with the digital world. They stood there, and if you couldn’t see them, they couldn’t see you.

Flood modeling is on the cutting edge of disaster planning and response. Image Credits: CHANDAN KHANNA/AFP via Getty Images

Models, simulations, predictions, analysis: all of these are useless without raw data, and in the disaster response realm, there was no detailed data to be found.

After years of promising an IoT revolution, things are finally internet-izing, with sensors increasingly larding up the American and global landscape. Temperature, atmospheric pressure, water level, humidity, pollution, power and other sensors have been widely deployed, emitting constant streams of data back into data warehouses ready for analysis.
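
What one of those streams looks like is mundane but powerful. Here is a hypothetical river-gauge payload and the trivial kind of ingestion check a warehouse pipeline might run, with the schema and thresholds invented for illustration:

    # Hypothetical river-gauge payloads and a trivial ingestion check;
    # schema and thresholds are invented for illustration.
    import json
    import statistics

    readings = [
        {"sensor_id": "river-07", "metric": "water_level_m", "value": v}
        for v in [1.2, 1.3, 1.2, 1.4, 2.9]   # the last pulse spikes
    ]

    baseline = statistics.mean(r["value"] for r in readings[:-1])
    latest = readings[-1]
    if latest["value"] > baseline * 1.5:     # crude anomaly threshold
        print("ALERT:", json.dumps(latest))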

Take wildfires in the American West. It wasn’t all that long ago that the U.S. federal government and state firefighting agencies had no knowledge of where a blaze was taking place. Firefighting has been “100 years of tradition unimpeded by progress,” as Tom Harbour, head of fire response for a decade at the U.S. Forest Service and now chief fire officer at Cornea, put it.

And he’s right. After all, firefighting is a visceral activity — responders can see the fires, even feel the burning heat radiating against their flesh. Data wasn’t useful, particularly in the West, where there are millions of acres of land and large swaths are sparsely populated. Massive conflagrations could be detected by satellites, but smoldering fires in the brush would be entirely invisible to the geospatial authorities. There’s smoke over California — what exactly is a firefighter on the ground supposed to do with such valuable information?

Today, after a decade of speculative promise, IoT sensors are starting to clear a huge part of this fog. Aaron Clark-Ginsberg, a social scientist at RAND Corporation who researches community resilience, said that air quality sensors have become ubiquitous since they are “very cheap [and] pretty easy to use” and can offer very fine-grained understandings of pollution — a key signal, for instance, of wildfires. He pointed to the company Purple Air, which, in addition to making sensors, produces a popular consumer map of air quality, as indicative of what today’s technology can do.

Maps are the critical intersection for data in disasters. Geospatial information systems (GIS) form the basis for most planning and response teams, and no company has a larger footprint in the sector than privately held Esri. Ryan Lanclos, who leads public safety solutions at the company, pointed to the huge expansion of water sensors as radically changing responses to certain disasters. “Flood sensors are always pulsing,” he said, and with a “national water model coming out of the federal government,” researchers can now predict through GIS analysis how a flood will affect different communities with a precision unheard of previously.
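
The core GIS operation underneath that kind of analysis is easy to illustrate: overlay community assets on a predicted flood extent and flag whatever falls inside it. Here is a toy sketch using the shapely geometry library, with a made-up flood polygon and facility coordinates standing in for real model output:

```python
from shapely.geometry import Point, Polygon

# Hypothetical predicted flood extent, as a lon/lat polygon.
flood_extent = Polygon([(-95.40, 29.70), (-95.30, 29.70),
                        (-95.30, 29.80), (-95.40, 29.80)])

# Hypothetical community assets to check against the extent.
facilities = {
    "elementary_school": Point(-95.35, 29.75),
    "fire_station":      Point(-95.45, 29.75),
}

for name, location in facilities.items():
    status = "inside" if flood_extent.contains(location) else "outside"
    print(f"{name}: {status} the predicted flood extent")
```

Real analyses run this same point-in-polygon test across every parcel, road segment, and utility line in a county, which is exactly why a constantly updated water model is so valuable.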

Digital maps and GIS systems are increasingly vital for disaster planning and response, but paper remains ubiquitous. Image Credits: Paul Kitagaki Jr.-Pool/Getty Images

Cory Davis, the director of public safety strategy and crisis response at Verizon (which, through our parent company Verizon Media, is TechCrunch’s ultimate owner), said that all of these sensors have transformed how crews work to maintain infrastructure as well. “Think like a utility that is able to put a sensor on a power line — now they have sensors and get out there quicker, resolve it, and get the power back up.”

He noted one major development that has transformed sensors in this space over the last few years: battery life. Thanks to continuous improvements in ultra-low-power wireless chips as well as better batteries and energy management systems, sensors can now last for years in the wilderness without maintenance. “Now we have devices that have ten-year battery lives,” he said. That’s critical, because it can be impossible to connect these sensors to the power grid in frontier areas.

The same line of thinking holds true at T-Mobile as well. When it comes to preventative planning, Jay Naillon, senior director of national technology service operations strategy at the telco, said that “the type of data that is becoming more and more valuable for us is the storm surge data — it can make it easier to know we have the right assets in place.” That data comes from flood sensors that can offer real-time warning signals to planners across the country.

Telecom interest — and commercial interest in general — has been critical to accelerating the adoption of sensors and other data streams around disasters. While governments may be the logical end user of flood or wildfire data, they aren’t the only ones interested in this visibility. “A lot of consumers of that information are in the private sector,” said Jonathan Sury, project director at the National Center for Disaster Preparedness at the Earth Institute at Columbia University. “These new types of risks, like climate change, are going to affect their bottom lines,” and he pointed to bond ratings, insurance underwriting and other areas where commercial interest in sensor data has been profound.

Sensors may not literally be ubiquitous, but they have opened a window onto conditions that emergency managers never had visibility into before.

Finally, there are the extensive datasets around mobile usage that have become available throughout much of the world. Facebook’s Data for Good project, for instance, provides data layers around connectivity — are users connecting from one place and then later connecting from a different location, indicating displacement? That sort of data from the company and from telcos themselves can help emergency planners scout out how populations are shifting in real time.
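
The underlying comparison is simple enough to sketch: look at where each user typically connected before a disaster versus after, and flag the ones whose modal location changed. The toy Python below invents a handful of observations and grid cells; Facebook’s actual methodology is aggregated and privacy-protected in ways this sketch ignores:

```python
from collections import Counter

# Toy connectivity logs: (user, period, grid_cell) observations, where
# "before"/"after" refer to a disaster cutoff and grid cells stand in
# for the aggregated map tiles a real pipeline would use.
observations = [
    ("user_a", "before", "tile_12"), ("user_a", "after", "tile_12"),
    ("user_b", "before", "tile_12"), ("user_b", "after", "tile_47"),
    ("user_c", "before", "tile_12"), ("user_c", "after", "tile_47"),
]

def modal_cell(user, period):
    """Most common connection location for a user within a period."""
    cells = [c for u, p, c in observations if u == user and p == period]
    return Counter(cells).most_common(1)[0][0] if cells else None

users = {u for u, _, _ in observations}
displaced = {u for u in users
             if modal_cell(u, "before") != modal_cell(u, "after")}
print(f"{len(displaced)} of {len(users)} users appear displaced")
```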

Data, data, on the wall — how many AIs can they call?

Rivulets of data have now turned into floods of information, but just like floodwaters rising in cities across the world, the data deluge now needs a response all its own. In business, the surfeit of big data has been wrangled with an IT stack from data warehouses all the way to business intelligence tools.

If only data for disasters could be processed so easily. Data relevant for disasters is held by dozens of different organizations spanning the private, public, and non-profit sectors, leading to huge interoperability problems. Even when the data can be harmonized, there are large challenges in summarizing the findings down to an actual decision a frontline responder can use in their work — making AI a tough sell still today, particularly outside of planning. As Davis of Verizon put it, “now that they have this plethora of data, a lot of cities and federal agencies are struggling with how to use it.”

Unfortunately, standardization is a challenge at all scales. Globally, countries mostly lack interoperability, although standards are improving over time. Amir Elichai, the founder and CEO of 911 call-handling platform Carbyne, said that “from a technology standpoint and a standards standpoint, there is a big difference between countries,” noting that protocols from one country often have to be completely rewritten to serve a different market.

Tom Cotter, director of emergency response and preparedness at health care disaster response organization Project HOPE, said that even setting up communications between responders can be challenging in an international environment. “Some countries allow certain platforms but not others, and it is constantly changing,” he said. “I basically have every single technology communication platform you can possibly have in one place.”

One senior federal emergency management official acknowledged that data portability has become increasingly key in procurement contracts for technology, with the government recognizing the need to buy commercially-available software rather than custom-designed software. That message has been picked up by companies like Esri, with Lanclos stating that “part of our core mission is to be open and … create data and to share that openly to the public or securely through open standards.”

For all its downsides though, the lack of interoperability can be ironically helpful for innovation. Elichai said that the “lack of standards is an advantage — you are not buying into a legacy standard,” and in some contexts where standards are lacking, quality protocols can be built with the assumption of a modern data workflow.

Even with interoperability though, the next challenge becomes data sanitation — and disaster data is dirty as … well, something. While sensor streams can be verified and cross-checked with other datasets, in recent years there has been a heavy increase in the quantity of citizen-submitted information that has to be carefully vetted before it is disseminated to first responders or the public.

With citizens having more access to smartphones than ever, emergency planners have to sanitize uploaded data in order to verify it and make it useful. Image Credits: TONY KARUMBA/AFP via Getty Images

Bailey Farren, CEO and co-founder of disaster communications platform Perimeter, said that “sometimes citizens have the most accurate and real-time information, before first responders show up — we want citizens to share that with …government officials.” The challenge is how to filter the quality goods from the unhelpful or malicious. Raj Kamachee, the CIO of Team Rubicon, a non-profit which assembles teams of volunteer military veterans to respond to natural disasters, said that verification is critical, and it’s a key element of the infrastructure he has built at the organization since joining in 2017. “We’ve gotten more people using it so more feedback [and] more data [is] coming through the pipes,” he said. “So creating a self-service, a very collaborative approach.”
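
One common vetting pattern is to corroborate a citizen report against nearby sensor readings before pushing it to responders. The sketch below shows that cross-check in Python; the report, sensor feed, distance cutoff, and flood threshold are all invented for illustration, not drawn from Perimeter’s or Team Rubicon’s actual systems:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical citizen report and nearby sensor readings.
report = {"lat": 29.76, "lon": -95.37, "claim": "street flooding"}
sensors = [
    {"lat": 29.75, "lon": -95.36, "water_level_m": 1.9},
    {"lat": 29.77, "lon": -95.38, "water_level_m": 2.1},
]

# Corroborate the report if any sensor within 5 km shows elevated water.
FLOOD_THRESHOLD_M = 1.5
corroborated = any(
    haversine_km(report["lat"], report["lon"], s["lat"], s["lon"]) <= 5
    and s["water_level_m"] >= FLOOD_THRESHOLD_M
    for s in sensors
)
print("report corroborated" if corroborated else "needs manual review")
```

Reports that fail the automated cross-check aren’t discarded; they get routed to a human, which is where the collaborative, self-service approach Kamachee describes comes in.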

With quality and quantity, the AI models should come, right? Well, yes and no.

Sury of Columbia wants to cool down at least some of the hype around AI. “The big caveat with all of these machine learning and big data applications is that they are not a panacea — they are able to process a lot of disparate information, [but] they’re certainly not going to tell us exactly what to do,” he said. “First responders are already processing a lot of information,” and they don’t necessarily need more guidance.

Instead, AI in disasters is increasingly focused on planning and resilience. Sury pointed to OneConcern, a resiliency planning platform, as one example of how data and AI can be combined in the disaster planning process. He also pointed to the CDC’s Social Vulnerability Index and risk tools from FEMA that integrate different data signals into scalar values that emergency planners can use to optimize their contingency plans.
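
The general idea behind such composite indices is easy to demonstrate: normalize several risk signals onto a common scale, then collapse them into one number that planners can rank communities by. The signals and weights below are invented for illustration and are not the CDC’s or FEMA’s actual methodology:

```python
# Toy composite index: each signal is already on a 0-1 scale, so the
# composite is just a weighted sum -- a single scalar a planner can
# rank census tracts by.
signals = {
    "poverty_rate":   0.18,  # fraction of households below the poverty line
    "pct_over_65":    0.22,  # fraction of population over 65
    "flood_exposure": 0.65,  # fraction of parcels in the floodplain
}
weights = {"poverty_rate": 0.4, "pct_over_65": 0.3, "flood_exposure": 0.3}

vulnerability = sum(signals[k] * weights[k] for k in signals)
print(f"composite vulnerability score: {vulnerability:.2f}")  # 0.33
```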

Yet, almost everyone I talked to was much more hesitant about the power of AI. As I discussed a bit in part one of this series regarding the disaster sales cycle, data tools have to be real-time and perfect every time given the lives that are on the line. Kamachee of Team Rubicon noted that when choosing tools, he avoids whiz-bang features and instead looks at the pure utility of individual vendors. “We go high tech, but we prepare for low tech,” he said, emphasizing that in disaster response, everything must be agile and adaptable to changing circumstances.

Elichai of Carbyne saw this pattern in his sales. There’s a “sensitivity in our market and the reluctance from time to time to adopt” new technologies, he said, but he acknowledged that “there is no doubt that AI at a certain point will provide benefits.”

Naillon of T-Mobile had similar views from the operator perspective, saying that “I can’t say that we really leverage AI very much” in the company’s disaster planning. Instead of AI as the brain, the telecom company simply uses data and forecast modeling to position equipment optimally — no fancy GANs required.

Outside of planning, AI has helped in post-disaster recovery, specifically around damage assessments. After a crisis transpires, assessments of infrastructure and private property have to be made in order for insurance claims to be filed and for a community to move forward. Art delaCruz, COO and president of Team Rubicon, noted that technology, including a flourish of AI, has helped significantly with damage assessments. Since his organization often helps rebuild communities in the course of its work, triaging damage is a critical element of an effective response strategy.

There’s a brighter future, other than that brightness from the sun that is going to burn us to a crisp, right?

So AI today is helping a bit with resilience planning and disaster recovery and not so much during emergency response itself, but there is certainly more to come across the entire cycle. Indeed, there is a lot of excitement about the future of drones, which are increasingly being used in the field, but there are concerns long term about whether AI and data will ultimately cause more problems than they solve.

Drones would seem to have an obvious value for disaster response, and indeed, they have been used by teams to get additional aerial footage and context where direct access by responders is limited. Kamachee of Team Rubicon noted that on a mission in the Bahamas, response teams used drones to detect survivors, since major roads were blocked. The drones snapped images that were processed using AI, helping the team identify those survivors for evacuation. He described drones and their potential as “sexy; very, very cool.”
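
Team Rubicon hasn’t published the details of that pipeline, but the flag-people-in-frames step it describes can be sketched with an off-the-shelf detector. The Python below assumes a recent torchvision and a generic COCO-pretrained model; in practice, aerial imagery usually requires models retrained on overhead views, since stock detectors are tuned for ground-level photos:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Load a COCO-pretrained detector; class 1 in COCO is "person".
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# Stand-in for a drone frame: a random 3-channel image tensor in [0, 1].
# A real pipeline would load and normalize actual aerial imagery.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Keep confident "person" detections as candidate survivor locations.
people = [
    (box.tolist(), float(score))
    for box, label, score in zip(
        detections["boxes"], detections["labels"], detections["scores"])
    if label.item() == 1 and score >= 0.8
]
print(f"possible survivors flagged: {len(people)}")
```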

Aerial views from drones can give disaster response teams much better real-time information, particularly in areas where on-the-ground access is limited. Image Credits: Mario Tama/Getty Images

Cotter of Project HOPE similarly noted that faster data processing translates to better responses. “Ultimately speed is what saves lives in these disasters,” he said. We’re “also able to manage more responses remotely [and] don’t have to send as many people downrange,” giving response teams more leverage in resource-constrained environments.

“I see more emergency management agencies using drone technology — search and rescue, aerial photography,” Davis of Verizon said, noting that operators often have a mentality of “send a machine into a situation first.” He continued: “Artificial intelligence is going to continue to get better and better and better [and] enable our first responders to respond more effectively, but also more efficiently and safer.”

With data flooding in from sensors and drones, and processed and verified better than ever, disaster response can improve, perhaps even fast enough to keep up with Mother Nature’s increasingly deadly whims. Yet there is one caveat: will the AI algorithms themselves cause new problems in the future?

Clark-Ginsberg of RAND, perhaps supplying that typical RANDian alternatives analysis, said that these solutions can also create problems of their own, with “technological risks leading to disaster and the world of technology facilitating disaster.” These systems can break, they can make mistakes, and more ominously — they can be sabotaged to increase chaos and damage.

Bob Kerrey, a co-chair of the 9/11 Commission, former senator and governor of Nebraska, and currently the board chairman of Risk & Return, a disaster response VC fund and philanthropy I profiled recently, pointed to cybersecurity as increasingly a wild card in many responses. “There wasn’t a concept called zero days — let alone a market for zero days — in 2004 [when the 9/11 Commission was doing its work], and now there is.” With the 9/11 terrorist attacks, “they had to come here, they had to hijack planes … now you don’t need to hijack planes to damage the United States,” noting that hackers “can be sitting with a bunch of other guys in Moscow, in Tehran, in China, or even your mother’s basement.”

Data is a revolution in the making for disaster response, but it may well cause a whole second-order set of problems that didn’t exist before. What the data giveth, it taketh away. The oil gushes, but then the well suddenly runs dry – or simply catches fire.

