The ancient Romans were masters of engineering, constructing vast networks of roads, aqueducts, ports, and massive buildings, whose remains have survived for two millennia. Many of these structures were built with concrete: Rome’s famed Pantheon, which has the world’s largest unreinforced concrete dome and was dedicated in A.D. 128, is still intact, and some ancient Roman aqueducts still deliver water to Rome today. Meanwhile, many modern concrete structures have crumbled after a few decades.
Researchers have spent decades trying to figure out the secret of this ultradurable ancient construction material, particularly in structures that endured especially harsh conditions, such as docks, sewers, and seawalls, or those constructed in seismically active locations.
Now, a team of investigators from MIT, Harvard University, and laboratories in Italy and Switzerland has made progress in this field, discovering ancient concrete-manufacturing strategies that incorporated several key self-healing functionalities. The findings are published today in the journal Science Advances, in a paper by MIT professor of civil and environmental engineering Admir Masic, former doctoral student Linda Seymour ’14, PhD ’21, and four others.
For many years, researchers have assumed that the key to the ancient concrete’s durability was based on one ingredient: pozzolanic material such as volcanic ash from the area of Pozzuoli, on the Bay of Naples. This specific kind of ash was even shipped all across the vast Roman empire to be used in construction, and was described as a key ingredient for concrete in accounts by architects and historians at the time.
Under closer examination, these ancient samples also contain small, distinctive, millimeter-scale bright white mineral features, which have been long recognized as a ubiquitous component of Roman concretes. These white chunks, often referred to as “lime clasts,” originate from lime, another key component of the ancient concrete mix. “Ever since I first began working with ancient Roman concrete, I’ve always been fascinated by these features,” says Masic. “These are not found in modern concrete formulations, so why are they present in these ancient materials?”
Previously disregarded as merely evidence of sloppy mixing practices, or poor-quality raw materials, the new study suggests that these tiny lime clasts gave the concrete a previously unrecognized self-healing capability. “The idea that the presence of these lime clasts was simply attributed to low quality control always bothered me,” says Masic. “If the Romans put so much effort into making an outstanding construction material, following all of the detailed recipes that had been optimized over the course of many centuries, why would they put so little effort into ensuring the production of a well-mixed final product? There has to be more to this story.”
Upon further characterization of these lime clasts, using high-resolution multiscale imaging and chemical mapping techniques pioneered in Masic’s research lab, the researchers gained new insights into the potential functionality of these lime clasts.
Historically, it had been assumed that when lime was incorporated into Roman concrete, it was first combined with water to form a highly reactive paste-like material, in a process known as slaking. But this process alone could not account for the presence of the lime clasts. Masic wondered: “Was it possible that the Romans might have actually directly used lime in its more reactive form, known as quicklime?”
Studying samples of this ancient concrete, he and his team determined that the white inclusions were, indeed, made out of various forms of calcium carbonate. And spectroscopic examination provided clues that these had been formed at extreme temperatures, as would be expected from the exothermic reaction produced by using quicklime instead of, or in addition to, the slaked lime in the mixture. Hot mixing, the team has now concluded, was actually the key to the concrete’s super-durable nature.
“The benefits of hot mixing are twofold,” Masic says. “First, when the overall concrete is heated to high temperatures, it allows chemistries that are not possible if you only used slaked lime, producing high-temperature-associated compounds that would not otherwise form. Second, this increased temperature significantly reduces curing and setting times since all the reactions are accelerated, allowing for much faster construction.”
During the hot mixing process, the lime clasts develop a characteristically brittle nanoparticulate architecture, creating an easily fractured and reactive calcium source, which, as the team proposed, could provide a critical self-healing functionality. As soon as tiny cracks start to form within the concrete, they can preferentially travel through the high-surface-area lime clasts. This material can then react with water, creating a calcium-saturated solution, which can recrystallize as calcium carbonate and quickly fill the crack, or react with pozzolanic materials to further strengthen the composite material. These reactions take place spontaneously and therefore automatically heal the cracks before they spread. Previous support for this hypothesis was found through the examination of other Roman concrete samples that exhibited calcite-filled cracks.
To prove that this was indeed the mechanism responsible for the durability of the Roman concrete, the team produced samples of hot-mixed concrete that incorporated both ancient and modern formulations, deliberately cracked them, and then ran water through the cracks. Sure enough: Within two weeks the cracks had completely healed and the water could no longer flow. An identical chunk of concrete made without quicklime never healed, and the water just kept flowing through the sample. As a result of these successful tests, the team is working to commercialize this modified cement material.
“It’s exciting to think about how these more durable concrete formulations could expand not only the service life of these materials, but also how it could improve the durability of 3D-printed concrete formulations,” says Masic.
Through the extended functional lifespan and the development of lighter-weight concrete forms, he hopes that these efforts could help reduce the environmental impact of cement production, which currently accounts for about 8 percent of global greenhouse gas emissions. Along with other new formulations, such as concrete that can actually absorb carbon dioxide from the air, another current research focus of the Masic lab, these improvements could help to reduce concrete’s global climate impact.
The research team included Janille Maragh at MIT, Paolo Sabatini at DMAT in Italy, Michel Di Tommaso at the Instituto Meccanica dei Materiali in Switzerland, and James Weaver at the Wyss Institute for Biologically Inspired Engineering at Harvard University. The work was carried out with the assistance of the Archeological Museum of Priverno in Italy.
Late yesterday, The Information reported that it had seen internal Twitter Slack communications confirming that the company had intentionally cut off third-party Twitter app access to its APIs. The shut-down, which happened Thursday night US time, hasn’t affected all apps and services that use the API but instead appears targeted at the most popular third-party Twitter clients, including Tweetbot by Tapbots and Twitterrific by The Iconfactory. More than two days later, there’s still no official explanation from Twitter about why it chose to cut off access to its APIs with no warning whatsoever.
To say that Twitter’s actions are disgraceful is an understatement. Whether or not the apps comply with Twitter’s API terms of service, the lack of any advance notice or explanation to developers is unprofessional and an unrecoverable breach of trust between the company and its developers and users.
Twitter’s actions also show a total lack of respect for the role that third-party apps have played in the development and success of the service from its earliest days. Twitter was founded in 2006, but it wasn’t until the iPhone launched about a year later that it really took off, thanks to the developers who built the first mobile apps for the service.
By 2007, when the iPhone launched, The Iconfactory’s Twitter client, Twitterrific, was already a hit on the Mac and had played a role in coining the term ‘tweet.’ The app was also the first to use a bird icon and a character counter. And, although the iPhone SDK was still months off, The Iconfactory was already experimenting with bringing Twitterrific to the iPhone with the help of a jailbroken iPhone and class dumps of iPhone OS.
Despite the popularity of the iPhone, Twitter didn’t build its own mobile app. Instead, the company purchased Loren Brichter’s Tweetie in 2010. Not only was Tweetie a beautifully designed app that performed better on early iPhones than many of its competitors, but it also introduced the world to ‘pull-to-refresh,’ a UI detail that was later folded into iOS itself. Twitter re-skinned Tweetie as the official Twitter app, followed not long after by a flawed iPad app and, in early 2011, a Mac version that Brichter had been working on at the time of the acquisition.
Even after Twitter had its own suite of apps, the third-party app market flourished. Tweetbot by Tapbots came along in 2011 and quickly became a favorite of many users, distinguishing itself with its steady stream of new power-user features and thoughtful design. But it wasn’t long before Twitter’s relationships with third-party developers began to sour. It started with a vague set of rules introduced in 2012 that preferred CRM and analytics apps over clients like Twitterrific and Tweetbot. The ups and downs over the years that followed are too numerous to count, but the consequence was that for many years few new Twitter clients were developed.
However, relations began to thaw with the announcement of version 2.0 of the Twitter API, which went into effect in 2021. Not only did the API update make new features available, but Twitter promised to loosen restrictions on third-party developers. That led to renewed interest in third-party client development, resulting in innovative new features in apps like Spring, which was a runner-up in the Best App Update category of the 2022 MacStories Selects awards.
As it turns out, Twitter’s developer detente was short-lived. The fact that Elon Musk’s Twitter has cut off third-party developers isn’t surprising. Frankly, I expected it sooner, but I didn’t expect it to be done with an utter lack of respect for the developers who played such a critical role in the service’s success for more than a decade. The number of people who use third-party Twitter apps may be small in comparison to the service’s overall user base, but the role of the developers of those apps and the value of those power users to Twitter’s success is outsized by comparison. The developers of Twitterrific, Tweetbot, and every other app that has lost access to Twitter’s APIs deserved better than a silent flip of the switch late one Thursday night.
There have been many moments over the past decade when we worried that third-party Twitter apps might have met their end and didn’t. Unfortunately, I think this time, those past fears have been realized.
The MacStories team moved its social media presence to Mastodon in mid-December. So, if you’ve lost the use of your favorite third-party Twitter app and are looking for an alternative place to keep up on everything Apple, you can follow us there:
Also, Tweetbot users will be happy to know that Tapbots is working on a Mastodon client called Ivory that is currently in beta and should be released soon. We’ll have coverage of Ivory on MacStories as soon as it’s released publicly.
Founded in 2015, Club MacStories has delivered exclusive content every week for over six years.
In that time, members have enjoyed nearly 400 weekly and monthly newsletters packed with more of your favorite MacStories writing as well as Club-only podcasts, eBooks, discounts on apps, icons, and services. Join today, and you’ll get everything new that we publish every week, plus access to our entire archive of back issues and downloadable perks.
The Club expanded in 2021 with Club MacStories+ and Club Premier. Club MacStories+ members enjoy even more exclusive stories, a vibrant Discord community, a rotating roster of app discounts, and more. And, with Club Premier, you get everything we offer at every Club level plus an extended, ad-free version of our podcast AppStories that is delivered early each week in high-bitrate audio.
Scooter executives are bailing out.
It’s an experience familiar to many errant scooter riders: you’re barreling toward a red light and you’re tapping the brakes but they aren’t working, so it’s time to jump and let the scooter go flying.
Bird’s CEO and CFO have stepped down. I’ve learned Lime’s CFO is leaving.
Travis VanderZanden — the co-founder of Bird who reportedly cashed out tens of millions worth of stock during the company’s meteoric rise in valuation — has stepped down as the company’s leader. VanderZanden will remain chairman of the company’s board. Shane Torchiana, an obscure former BCG partner who has been at Bird since 2018, is taking over as president and CEO.
Bird’s Chief Financial Officer Yibo Ling — like VanderZanden a former Uber executive — is calling it quits.
Meanwhile, Andrea Ellis, who helped Lime raise $523 million in convertible debt and term loan financing late last year, is leaving Lime, sources told me and the company confirmed.
While Lime is well-capitalized after the debt financing last year, Bird appears to be in real peril. Bird’s market capitalization has fallen to $83.4 million. Embarrassingly, the company is in trouble with the New York Stock Exchange because its share price has fallen below the minimum price of $1.
Bird’s stock closed at $0.35 Thursday and had fallen two cents Friday morning.
As of June, Bird had $57 million in cash and cash equivalents. Meanwhile, the company burned through $47 million in net cash from operating activities in the first six months of the year. It doesn’t take higher level math skills to start to wonder whether Bird’s days are numbered.
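The back-of-the-envelope runway math is easy to sketch from the figures above; note the assumption of a constant linear burn rate, which the article itself undercuts by pointing out how seasonal the business is:

```python
# Rough runway estimate for Bird, using the reported figures.
# Assumes burn continues at the H1 linear rate, which likely flatters
# Bird: the scooter business is highly seasonal, and winter is coming.

cash_on_hand_m = 57.0   # $ millions, cash and equivalents as of June
h1_burn_m = 47.0        # $ millions, net cash used in operations, Jan-Jun

monthly_burn_m = h1_burn_m / 6.0
runway_months = cash_on_hand_m / monthly_burn_m
print(f"Implied runway: {runway_months:.1f} months")  # roughly 7 months
```

At that rate, the cash would run out well before the next spring riding season, which is the arithmetic behind wondering whether Bird’s days are numbered.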
Winter is coming. Literally. The scooter business is extremely seasonal.
I wanted to take a moment to eulogize the venture-scale scooter business. We’re seeing the full unraveling of the blitzscaled, cash-on-fire venture model.
This is probably the end — for this tech cycle at least — of the idea that if only venture capitalists can throw enough money at something, it will become a tech company. With Juicero, WeWork, Zume Pizza, Soylent, and so many others — companies were designated “tech” startups because venture capitalists invested in them. VCs forgot that things were supposed to flow the other way.
I’m not saying scooters are actually dead. Flying car company Kittyhawk — well, that’s dead (or about to be). And there will be many other cash-intensive “tech” businesses that bite the dust. Bird could survive, I guess — or sell. Lime could end up being the last man standing — and monopoly pricing could suddenly seem a lot more attractive.
But I feel confident in declaring — after catching up with former scooter employees — that this is not a venture-style business that’s going to reap venture-sized returns.
Lime can’t even bullshit me that they’re profitable. The company told me in a statement that they’re working to “drive closer to profitability.” That’s adjusted EBITDA profits that they’re driving toward. Lime isn’t profitable even if you squint and juke the stats.
“We ran the test and it turns out people don’t ride scooters. That’s just the problem,” one former scooter employee commiserated to me.
Like many ideas that attract hordes of venture capital dollars, the intuition was compelling: cheaper batteries and Chinese manufacturing paired with cellphones could make scooters a better, more sustainable business model than ride sharing. It actually seemed good for the world.
One Bird investor tried to explain the early hype. Investors had salivated over Bird’s top-line growth, this person explained to me. “I do have a note that their weekly run rate (granted it is a highly seasonal business) was over $300m. Just sharing this because if you just followed the top line numbers (not the economics) it would make it one of the fastest consumer revenue stories in venture history. (And plenty of other investments like YouTube were wildly unprofitable and had tremendous product/market fit as well that did work out). Even Google — one of the fastest growing consumer businesses ever didn’t come out this hot.”
Valuations soared. David Sacks led Bird’s Series A at Craft, announced in February 2018, which valued the company at $60 million post-money. Sequoia led a $150 million Series C investment in Bird that valued it at $1 billion in 2018, and then led another $158 million round that valued Bird at $2.1 billion, according to Pitchbook. It was a major investment for Roelof Botha, who would go on to become Sequoia’s senior steward.
Sequoia had missed out on Uber. In 2019, Uber went public, valued at $82 billion in the IPO. This was a chance to get it right the second time.
Uber wasn’t as big of a miss as Sequoia might have thought — it’s worth $57 billion today. Bird is worth $83.4 million.
Lime is a similar story. Uber has poured hundreds of millions into the company. GV led a $330 million round in 2018 that valued Lime at $1.1 billion. Since Lime is still a private company, we don’t know what it’s worth today.
“It was the beginning of that pure VC ‘herd mentality’ period,” remembers someone from the on-demand world. “Almost everyone was trying to tell everyone over and over the economics didn’t make any sense, but no reason for anyone to listen.”
There is something to be said about a truly disastrous meal, a meal forever indelible in your memory because it’s so uniquely bad, it can only be deemed an achievement. The sort of meal where everyone involved was definitely trying to do something; it’s just not entirely clear what.
I’m not talking about a meal that’s poorly cooked, or a server who might be planning your murder—that sort of thing happens in the fat lump of the bell curve of bad. Instead, I’m talking about the long tail stuff – the sort of meals that make you feel as though the fabric of reality is unraveling. The ones that cause you to reassess the fundamentals of capitalism, and whether or not you’re living in a simulation in which someone failed to properly program this particular restaurant. The ones where you just know somebody’s going to lift a metal dome off a tray and reveal a single blue or red pill.
I’m talking about those meals.
At some point, the only way to regard that sort of experience—without going mad—is as some sort of community improv theater. You sit in the audience, shouting suggestions like, “A restaurant!” and “Eating something that resembles food” and “The exchange of money for goods, and in this instance the goods are a goddamn meal!” All of these suggestions go completely ignored.
That is how I’ve come to regard our dinner at Bros, Lecce’s only Michelin-starred restaurant, as a means of preserving what’s left of my sanity. It wasn’t dinner. It was just dinner theater.
No, scratch that. Because dinner was not involved. I mean—dinner played a role, the same way Godot played a role in Beckett’s eponymous play. The entire evening was about it, and guess what? IT NEVER SHOWED.
So no, we can’t call it dinner theater. Instead, we will say it was just theater.
Very, very expensive theater.
I realize that not everyone is willing or able to afford a ticket to Waiting for Gateau and so this post exists, to spare you our torment. We had plenty of beautiful meals in Lecce that were not this one, and if you want a lovely meal out, I’ll compile a list shortly.
But for now, let us rehash whatever the hell this was.
We headed to the restaurant with high hopes – eight of us in total, led into a cement cell of a room, Drake pumping through invisible speakers. It was sweltering hot, and no other customers were present. The décor had the chicness of an underground bunker where one would expect to be interrogated for the disappearance of an ambassador’s child.
Earlier that day, we’d seen a statue of a bear, chiseled into marble centuries ago, by someone who had never actually seen a bear. This is the result:
And this is a perfect allegory for our evening. It’s as though someone had read about food and restaurants, but had never experienced either, and this was their attempt to recreate it.
What followed was a 27-course meal (note that “course” and “meal” and “27” are being used liberally here), which spanned 4.5 hours and made me feel like I was a character in a Dickensian novel. Because – I cannot impart this enough – there was nothing even close to an actual meal served. Some “courses” were slivers of edible paper. Some shots were glasses of vinegar. Everything tasted like fish, even the non-fish courses. And nearly everything, including these noodles, which were by far the most substantial dish we had, was served cold.
Amassing two-dozen of them together amounted to a meal the same way amassing two-dozen toddlers together amounts to one middle-aged adult.
I’ve checked Trip Advisor. Other people who’ve eaten at Bros were served food. Some of them got meat, and ravioli, and more than one slice of bread. Some of them were served things that needed to be eaten with forks and spoons.
We got a tablespoon of crab.
I’ve tried to come up with hypotheses for what happened. Maybe the staff just ran out of food that night. Maybe they confused our table with that of their ex-lover’s. Maybe they were drunk. But we got twelve kinds of foam, something that I can only describe as “an oyster loaf that tasted like Newark airport”, and a teaspoon of savory ice cream that was olive flavored.
I’m still not over that, to be honest. I thought it was going to be pistachio.
There is no menu at Bros. Just a blank newspaper with a QR code linking to a video featuring one of the chefs, presumably, against a black background, talking directly into the camera about things entirely unrelated to food. He occasionally used the proper noun of the restaurant as an adverb, the way a Smurf would. This means that you can’t order anything besides the tasting menu, but also that you are at the mercy of the servers to explain to you what the hell is going on.
The servers will not explain to you what the hell is going on.
They will not do this in Italian. They will not do this in English. They will not play Pictionary with you on the blank newspaper as a means of communicating what you are eating. On the rare occasion where they did offer an explanation for a dish, it did not help.
“These are made with rancid ricotta,” the server said, a tiny fried cheese ball in front of each of us.
“I’m… I’m sorry, did you say rancid? You mean… fermented? Aged?”
“Okay,” I said in Italian. “But I think that something might be lost in translation. Because it can’t be-”
“Rancido,” he clarified.
Another course – a citrus foam – was served in a plaster cast of the chef’s mouth. Absent utensils, we were told to lick it out of the chef’s mouth in a scene that I’m pretty sure was stolen from an eastern European horror film.
For reasons that could fill an entire volume of Time-Life’s Mysteries of the Unknown, THIS ITEM IS AVAILABLE FOR SALE AT THEIR GIFT SHOP. In case you want to have a restraining order filed against you this holiday season.
Now, at this point, I may have started quietly freaking out. A hierarchical pecking order was being established, and when you’re the one desperately slurping sustenance out of the plaster cast of someone else’s mouth, it’s safe to say you are at the bottom of that pyramid. We’d been beaten into some sort of weird psychological submission. Like the Stanford Prison Experiment but with less prison and more aspic. That’s the only reason I have for why we didn’t leave during any of these incidents:
No, we just sat there while the food was portioned out a teaspoon at a time, a persistent and sustained sort of agony, like slowly peeling off a band-aid. That’s the problem with a tasting menu. With so many courses, you just assume things are going to turn around. Every dish is a chance for redemption. Maybe this meal was like Nic Cage’s career – you have to wait a really long time for the good stuff, but there is good stuff.
BUT NO. We kept waiting for someone to bring us something – anything! – that resembled dinner. Until the exact moment when we realized: it would never come. It was when our friend Lisa tried to order another bottle of wine.
“Would you like red or white?” the server asked.
“What are we having for the main?” she inquired.
His face blanched.
“The… main, madame? Um… we’re about to move on to dessert.”
We sat for a moment, letting this truth settle over us. Because by now it had been hours, and at no point had we been served anything that could be considered dinner. (There was one time when I thought it might happen – the staff placed dishes in front of us, and then swirled sauces on the dishes, and I clapped my hands, excitedly waiting for something to be plated atop those beautiful sauces. Instead, someone came by with an eyedropper and squirted drops of gelee onto our plates).
“We’ve infused these droplets with meat molecules,” the server explained, and left.
I don’t know if our experience was the norm. I’ve looked at TripAdvisor’s photos for Bros, and other people who’ve gone there seem to have been fed actual food. Like, even this person, who was served the same weird meat droplet course, at least got it with a triangle of foamy-looking bread. Do you know what it’s like to envy someone for a piece of foamy-looking bread? IT’S NOT GREAT.
“There’s no … main?” Lisa said to us in disbelief after the server had retreated.
“Hey,” I said, my hand resting on her arm. She was shaking slightly from low blood sugar. “It’s okay.”
“They haven’t fucking fed us,” she said, her eyes wide.
“I know, I know,” I said, “But look. We’re in this amazing country. And I don’t know about you, but nothing is going to stop me from enjoying tonight.”
“Because I’m surrounded by my favorite people,” I said, and I squeezed Lisa’s hand for emphasis, “and I’m at my favorite restaurant.”
Lisa sputtered laughing. No more food was coming, but there was something freeing in that. Because this meal had never been about us to begin with. It sure as hell wasn’t about the food. And there is something glorious about finally giving up.
We sat through a few more courses including a marshmallow flavored like cuttlefish, and a dish called “frozen air” which literally melted before you could eat it, which felt like a goddamn metaphor for the night.
And then someone came in and demanded we stand and exit the restaurant. Thinking we were getting kicked out, we gleefully followed. Instead, we were led across the street, to a dark doorway and into the Bros laboratory. A video of the shirtless kitchen staff doing extreme sports played on a large screen TV while a chef cut us comically tiny slivers of fake cheese.
Rand was, of course, allergic to it.
The bill arrived. The meal cost more than any other we’d eaten during our trip by a factor of three. They’d given us balloons with the restaurant’s name across them, and the chef emerged and insisted on posing with us for a Polaroid that we did not ask for. We were finally released into the night, after every other restaurant had closed, ensuring that no food would be consumed that evening.
“That was abhorrent,” we all agreed as we shoved the balloons into a dumpster (I’d made everyone take one, with the baffling logic that they’d somehow help offset the cost of the meal). We howled at how ridiculous it was, and how they’d poisoned Rand. How maybe we should have known that a restaurant named “Bros” was going to be a disaster.
It was like an awful show that we had front row tickets to. But wasn’t there something glorious about sharing it together, the way that a terrible experience makes you all closer?
“No,” someone said, and we laughed even harder.
P.S. – The next day, one of the staff tried contacting the only single female member of our party via Instagram messages. “Hey, I served you last night!” he wrote. She immediately blocked him.
Bros., Via degli Acaya, 2, 73100 Lecce LE, Italy
Cost: a rather mortifying 130-200 Euros per person
Note: the TripAdvisor reviews show a lot of elaborate courses, and these were all way, way more food than anything we ate. I cannot express to you how little we were fed, and I’m not a particularly big eater. Allergies and dietary restrictions were largely ignored.
Recommendation: Do not eat here. I cannot express this enough. This was single-handedly one of the worst wastes of money in my entire food and travel writing career bwah ha ha ha ha ha ha oh my god
The post Bros., Lecce: We Eat at The Worst Michelin Starred Restaurant, Ever appeared first on The Everywhereist.
What would we do with abundant energy? I dream of virtually unlimited, clean, dirt-cheap energy, but lately, we have been going in the wrong direction. As J. Storrs Hall notes, in 1978 and 1979, American per capita primary energy consumption peaked at 12 kW. In 2019, we used 10.2 kW of primary energy (and in 2020, we used 9.4 kW, a figure skewed by the pandemic economy). We are doing more with less, squeezing out more value per joule than ever before. But why settle for energy efficiency alone? With many more joules, we could create much more value and live richer lives.
A benefit of climate change is that lots of smart people are rethinking energy, but I fear they aren’t going far enough. If we want not just to replace current energy consumption with low-carbon sources, but also to, say, increase global energy output by an order of magnitude, we need to look beyond wind and solar. Nuclear fission would be an excellent option if it were not so mired in regulatory obstacles. Fusion could do it, but it still needs a lot of work. Next-generation geothermal could have the right mix of policy support, technology readiness, and resource size to make a big contribution to abundant clean energy in the near future.
Let’s talk about resource size first. Stanford’s Global Climate and Energy Project estimates crustal thermal energy reserves at 15 million zettajoules. Coal + oil + gas + methane hydrates amount to 630 zettajoules. That means there is 23,800 times as much geothermal energy in Earth’s crust as there is chemical energy in fossil fuels everywhere on the planet. Combining the planet’s reserves of uranium, seawater uranium, lithium, thorium, and fossil fuels yields 365,030 zettajoules. There is 41 times as much crustal thermal energy as energy in all those sources combined. (Total heat content of the planet, including the mantle and the core, is about three orders of magnitude higher still.)
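The ratios quoted above follow directly from the reserve estimates; a quick sanity check of the arithmetic (all figures in zettajoules, taken from the paragraph above):

```python
# Sanity-check the resource-size ratios (all figures in zettajoules).
crustal_thermal = 15_000_000  # Stanford GCEP estimate of crustal heat
fossil_fuels = 630            # coal + oil + gas + methane hydrates
everything_else = 365_030     # uranium, seawater uranium, lithium,
                              # thorium, and fossil fuels combined

print(round(crustal_thermal / fossil_fuels))     # ~23,810x fossil fuels
print(round(crustal_thermal / everything_else))  # ~41x all other sources
```

The first ratio rounds to the ~23,800 figure used in the text; the second confirms the 41x comparison against all chemical and nuclear reserves combined.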
Although today’s geothermal energy is only harvested from spots where geothermal steam has made itself available at the surface, with some creative subsurface engineering it could be produced everywhere on the planet. Like nuclear energy, geothermal runs 24/7, so it helps solve the intermittency problem posed by wind and solar. Unlike nuclear energy, it is not highly regulated, which means it could be cheap in practice as well as in theory.
At a high level, the four main next-generation geothermal concepts I will discuss do the same thing. They (1) locate and access heat, (2) transfer subsurface heat to a working fluid and bring it to the surface, and (3) exploit the heat energy at the surface through direct use or conversion to electricity. It is the second step, transferring subsurface heat to a working fluid, that is non-obvious.
What is the right working fluid? What is the best way to physically transfer the heat? Given drilling costs, what is the right target rock temperature for heat transfer? These questions are still unresolved. Different answers will give you a different technical approach. Let’s talk about the four different concepts people are working on right now, including their strengths and weaknesses, before turning to the bottlenecks in the industry.
Like today’s conventional geothermal (“hydrothermal”) systems, enhanced geothermal systems (EGS) feature one or more injection wells where water goes into the ground, and one or more production wells where steam comes out of the ground. Hydrothermal systems today not only need heat resources close to the surface, they require the right kind of geology in the near subsurface. The rock between the injection and production wells needs to be permeable so that the water can flow through it and acquire heat energy. The rock above that layer needs to be impermeable, so that steam doesn’t escape to the surface except through the production wells.
EGS starts with the premise of using drilling technology to access deeper heat resources. This makes it viable in more places than hydrothermal, which relies on visual evidence of heat at the surface for project siting. If you see a volcano or a geyser or a fumarole, that might be a good location for a conventional hydrothermal project. But there are only a limited number of such sites, and if we want to expand the geographic availability of geothermal we have to use deeper wells to access heat sources that are further below ground.
Once we have our deeper wells, we need a way for water to flow between them. Fortunately, since 2005, petroleum engineers have gotten good at making underground fracture networks. By using modified versions of the fracking perfected in the shale fields, geothermal engineers can create paths of tiny cracks through which water can flow between the two wells. This fracture network has a lot of surface area, which means it is relatively good for imparting heat energy to the water.
EGS has some advantages over the other next-generation geothermal concepts. From a technical perspective, it is not a big leap from existing hydrothermal practice, so the technology risk is low. In addition, the high surface area of the hot underground fracture network is good for creating steam.
Yet today’s EGS also has a disadvantage relative to the other approaches. Because the system has an open reservoir exposed to the subsurface, most EGS projects plan to use water as a working fluid. Water does not become supercritical until it reaches 374ºC (and 22 MPa). Using today’s drilling technology, EGS projects usually will not reach these temperatures, because it costs too much to drill to the required depths. Fluids in their supercritical states have higher enthalpy than in their subcritical states, so depth limitations mean EGS can’t bring as much heat energy to the surface as it could if it had access to a supercritical fluid.
Even so, EGS is promising. This year, Fervo raised a $28M Series B to pursue this approach. It also signed a deal with Google to power some of its data centers, part of the search giant’s plan to move to 100% zero-carbon energy by 2030.
Imagine that, like EGS, you had an injection and a production well, but instead of relying on a network of fractures in the open subsurface to connect them, you simply connected the two wells with a pipe. The working fluid would flow down the injection well, horizontally through a lateral segment of pipe, and then up through the production well. Because such a system is closed to the subsurface, it is called a closed-loop system.
Relative to EGS, closed-loop systems have both advantages and disadvantages. A key advantage is that the working fluid can easily be something other than water. Isobutane has a critical temperature of 134.6ºC, and CO2’s is only 31.0ºC. Even with today’s drilling technology, we can reach these temperatures almost everywhere on the planet. Closed-loop systems offer the higher enthalpy associated with supercritical fluids at depths we can reach today. In addition, closed-loop systems work no matter the underlying geology, removing a risk that EGS projects face.
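Under a typical continental geothermal gradient (an assumed average; real gradients vary widely by location), the depths required to reach each fluid’s critical temperature look roughly like this:

```python
# Depth needed to reach each working fluid's critical temperature, assuming
# a ~30 degC/km geothermal gradient and a 15 degC surface temperature
# (both are assumptions for illustration; real gradients vary widely).
GRADIENT_C_PER_KM = 30.0
SURFACE_C = 15.0

def depth_km(target_c):
    """Depth at which rock reaches target_c under the assumed gradient."""
    return (target_c - SURFACE_C) / GRADIENT_C_PER_KM

for fluid, t_crit in [("CO2", 31.0), ("isobutane", 134.6), ("water", 374.0)]:
    print(f"{fluid}: critical at {t_crit} degC -> ~{depth_km(t_crit):.1f} km")
```

Under these assumptions, CO2 goes supercritical well within a kilometer and isobutane within about 4 km, while water requires roughly 12 km — which is why supercritical water is out of reach for most EGS projects with today’s drilling.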
The big disadvantage of closed-loop systems is that pipes have much lower surface areas than fracture networks. Since heat is imparted to the working fluid by surface contact, this limits the rate at which the system can acquire energy. A solution to this is to use not just one horizontal segment, but many, like the radiator-style designs shown below. These segments can be numerous and long enough to ensure adequate heat transfer.
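To get a feel for the gap, compare the wetted area of a single lateral pipe to that of even a modest fracture network. All dimensions below are illustrative assumptions, not field data; since heat transfer rate scales with contact area, the area ratio is what matters:

```python
import math

# Surface-area comparison: one 2 km lateral of 9-5/8" (0.244 m) pipe vs. a
# modest fracture network. All dimensions are assumptions for illustration.
pipe_area = math.pi * 0.244 * 2000    # m^2: cylinder lateral surface
frac_area = 2 * 100 * (100 * 100)     # m^2: 100 fractures, 100 m x 100 m, both faces

print(f"pipe area:     {pipe_area:,.0f} m^2")
print(f"fracture area: {frac_area:,.0f} m^2")
print(f"ratio:         {frac_area / pipe_area:,.0f}x")
```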
The problem remains, however, that these radiator-style segments are expensive to drill with today’s technology. It is possible that with experience and better drilling techniques the cost could be reduced to make this approach viable. Closed-loop startup Eavor is pursuing this approach, starting with a project in Germany taking advantage of that country’s generous geothermal subsidies.
What if you could combine the advantages of closed loops—like the ability to use a supercritical working fluid—with a way to capture the heat from a much larger surface area than that of a simple pipe? That’s the goal of Sage Geosystems’s Heat Roots concept.
Sage starts with a single vertical shaft. From the base of the shaft, they frack downwards to create a fracture pattern that resembles a tree’s root system. They fill this “root” system with a convective and conductive fluid. Then, using a pipe-in-pipe system, they circulate a separate working fluid from the surface to the base of the shaft and back. At the base of the shaft, a heat exchanger takes the energy concentrated by the heat root system and imparts it to the working fluid.
This “heat roots” approach enables a lot of the benefits of closed-loop systems, like the ability to use supercritical fluids, without the main drawback of needing long horizontal pipe segments. The roots draw in and concentrate heat from greater depths than the primary shaft. In other words, closed-loop’s problem of limited surface area is solved by doing additional subsurface engineering outside of the closed loop.
A disadvantage of a monobore, pipe-in-pipe design is the limited flow rate of working fluid. In the oil and gas industry, the widest standard well diameter is 9⅝ inches. It would be non-trivial to go wider than that—you would need special drilling equipment and new casing systems. The power output of the entire system is directly proportional to the flow rate, so the monobore heat roots design is constrained in this way.
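Back-of-the-envelope, the coupling between flow rate and power looks like this (both inputs are assumed values for illustration, not Sage’s figures):

```python
# Thermal power delivered to the surface is flow rate times enthalpy gain:
# P = mdot * dh. Both inputs below are assumptions chosen for illustration.
mdot = 40.0    # kg/s, flow through a single monobore (assumed)
dh = 400e3     # J/kg, enthalpy picked up by the working fluid (assumed)

power_mw = mdot * dh / 1e6
print(f"thermal power: {power_mw:.0f} MW")
```

Because power is linear in mass flow, a casing diameter that caps flow rate caps the output of the whole well.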
This may or may not be a problem. If the cost of constructing each individual well is low enough, then the solution would be to stamp out hundreds of thousands of these wells. What matters is the cost per watt and that the design is reproducible. It may be possible to make these or similar wells work almost anywhere by simply drilling deeply enough, although that is not yet proven.
Sage raised a Series A earlier this year and is currently working on a demonstration well in Texas. “Once we get through a successful pilot these next few months,” says Sage CTO Lance Cook, “we are off to the races.” In addition to its heat roots design, it is also studying a few other configurations.
What if we had much better drilling technology? Put aside the fancy stuff, like horizontal segments—what if we could simply drill straight down into the earth much deeper and faster and cheaper than we can today?
This one capability would unlock a huge increase in geothermal power density. With depth comes higher temperatures. If we could cheaply and reliably access temperatures around 500ºC, we could make water go supercritical. This would unleash a step-change in enthalpy, without the closed loops otherwise needed for supercritical fluids. By doing EGS (concept #1) in these hotter conditions, we could get the biggest benefit of EGS—a high surface area to use to transfer heat—with one of the biggest benefits of closed-loop systems—the use of a supercritical working fluid. In addition to higher enthalpy, supercritical steam will produce higher electrical output by virtue of a higher delta-T in the generator cycle. Output of the cycle is directly proportional to the temperature differential between the steam and ambient conditions.
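A rough way to see the delta-T effect is the ideal (Carnot) bound on cycle efficiency. Real plants capture only a fraction of this bound, but the scaling with steam temperature is the point:

```python
# Ideal (Carnot) efficiency rises with source temperature: eta = 1 - Tc/Th,
# with temperatures in kelvin. The 25 degC ambient sink is an assumption.
def carnot_eta(t_hot_c, t_cold_c=25.0):
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

for t in [150, 250, 374, 500]:
    print(f"{t} degC steam: ideal efficiency {carnot_eta(t):.0%}")
```

Going from ~150ºC hydrothermal steam to ~500ºC supercritical steam roughly doubles the ideal efficiency, on top of the enthalpy gain per kilogram of fluid.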
The benefits of producing supercritical steam at the surface go beyond these physics-based arguments. A huge potential advantage would be the ability to retrofit existing coal plants. With many coal plants shutting down in the next several years, a lot of valuable generator equipment could be lying around idle. These generators take supercritical steam as an input and use it to produce electricity. The generators don’t care whether the steam comes from a boiler fired with coal or from 15 km underground. Piping steam from a geothermal production well straight into a coal plant turbine would allow the power plant to produce the same amount of electricity as it did under coal, except with no fuel costs and no carbon emissions.
Even if free generating equipment isn’t just lying around, supercritical geothermal steam could significantly increase the output and decrease the cost of geothermal electricity. The question is whether we can achieve the necessary cost reductions in ultra-deep drilling. Rotary drill bits struggle against hard basement rock. They break and then have to be retrieved to the surface, where they are repaired and sent back downhole. This process is time-consuming and expensive. Non-rotary drilling technologies like water hammers, lasers, plasma cutters, and mm-wave directed energy have all been proposed as ways to let us drill deeper faster. By optimizing for hot, dense, hard basement rock, we could drill much deeper than we can today.
The big downside of supercritical EGS is that these advanced drilling technologies haven’t been proven yet. The big advantage is what it could enable: high-density geothermal energy anywhere on the planet. Literally every location on the planet can produce supercritical steam if you drill deep enough into the basement rock—you may have to drill 20 km to reach 500ºC temperatures in some spots, but it’s there.
Quaise is an example of a company pursuing this supercritical EGS approach. The gyrotrons used in fusion experiments produce enough energy to vaporize granite. Quaise is commercializing mm-wave directed energy technology out of MIT’s Plasma Science and Fusion Center.
Unlike nuclear fission, which is regulated to near-oblivion, geothermal faces relatively few policy obstacles. I will highlight two areas where policy could easily be improved, but even if these problems are not fixed, they will likely only slow, not stop, maturation of the next-generation geothermal industry.
The first issue involves permitting. While our goal for this technology should be to enable geothermal anywhere on the planet, the natural starting point for working down the learning curve is in areas where high temperatures are closest to the surface. If you look at a map of temperature at depth in the United States, you will notice that the best spots for geothermal drilling overlap considerably with land owned by Uncle Sam.
Drilling on federal lands involves federal permitting—which involves environmental review. Environmental review, mandated by the National Environmental Policy Act any time a federal agency takes a major action that could affect the environment, can take years.
Conveniently, the oil and gas industry got itself an exclusion from these requirements. The effects of drilling an oil and gas well on federal lands are rebuttably presumed to be insignificant, as long as certain conditions are met—for example, that the well’s surface disturbance is less than 5 acres. Oil and gas wells are very similar to geothermal wells, so it makes sense that they would have very similar environmental impacts. As I have written for CGO, simply extending oil and gas’s categorical exclusion to geothermal energy is an absolute no-brainer.
This permitting issue shows that the nearly non-existent geothermal lobby is (surprise!) less effective than the oil and gas lobby. It may also be less effective than the wind and solar lobbies. Geothermal execs have complained that tax subsidies for geothermal are lower than for wind and solar. I am no tax expert, but if I am reading Section 48 of the tax code correctly, there is a 30% tax credit for utility-scale solar and only a 10% credit for a geothermal plant—that’s a big disparity. (There is also a 30% tax credit for investing in a facility to produce geothermal equipment and a 10-year 1.5¢-per-kWh subsidy for geothermal plants that break ground in 2021. [Update: It’s actually a 2.5¢/kWh subsidy because there is mandatory inflation adjustment and the basis is 1992. Hat tip: SW]).
Neither permitting barriers nor inadequate subsidization are likely to hold back geothermal forever. There are ways, however inconvenient, around the permitting obstacles, like operating on private lands. An unfavorable subsidy environment relative to solar might mean a slower start as financiers dip their toes into geothermal waters more gradually, or it might mean that projects move to Germany, where geothermal feed-in tariffs are quite generous. Even if they aren’t dealbreakers, we ought to fix these policy mistakes so that we can reap the benefits of abundant geothermal energy sooner rather than later.
Although some of the geothermal concepts I discussed above will work using today’s technology, there remains R&D to be done to unlock the others, and there are advances to be made that would help all players.
The first area where technical development is needed is in resource characterization—the ability to predict where the heat is in the subsurface and what geology surrounds it. Better predictions reduce project risk and reduce up-front exploration costs. Imagine you are drilling a geothermal well and it is not as hot as you expected it to be. Do you keep drilling and go deeper? Do you give up and drill somewhere else? Either way, it’s expensive. With more accurate predictions, we can keep these cost surprises under better control.
Machine learning is one possible way to crack resource characterization. The National Renewable Energy Laboratory has laid some good groundwork on machine learning and geothermal resources, and a startup called Zanskar is using what appears to be a similar approach. In addition to ML, bigger and more granular data sets as well as new sensor packages that could shed more light on subsurface conditions would be helpful.
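As a toy illustration of the idea—entirely synthetic data, not NREL’s or Zanskar’s actual model—here is a least-squares fit mapping surface-measurable quantities to temperature at depth:

```python
import numpy as np

# Toy sketch of data-driven resource characterization: predict temperature
# at a target depth from surface heat flow and rock conductivity. The data
# is synthetic; real workflows use far richer geophysical inputs.
rng = np.random.default_rng(0)
n = 200
heat_flow = rng.uniform(40, 120, n)      # mW/m^2, synthetic surface heat flow
conductivity = rng.uniform(2.0, 4.0, n)  # W/m-K, synthetic rock conductivity
depth = 3.0                              # km, target depth (assumed)

# Synthetic "ground truth" with noise: gradient = heat flow / conductivity
temp = 15 + depth * heat_flow / conductivity + rng.normal(0, 2, n)

# Least-squares fit on the physically motivated feature q/k
X = np.column_stack([heat_flow / conductivity, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
pred = X @ coef
print(f"mean absolute error: {np.abs(pred - temp).mean():.1f} degC")
```

The real challenge is that nature does not hand you a clean feature like q/k; better sensors and bigger datasets are about learning that mapping from messy field data.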
Next: we need to harden rotary drill bits and other downhole equipment for geothermal conditions. Geothermal drilling involves higher temperature, pressure, vibration, and shock than oil and gas drilling. Since oil and gas represents the lion’s share of the drilling business, today’s bits aren’t optimized for geothermal conditions. A modern bottom hole assembly includes a drill bit and also equipment for electricity generation, energy storage, communication and telemetry, and monitoring and sensing. It’s a lot of electronics.
Fortunately, NASA and others in the space industry are already working on suitable high-temperature electronics. To land a rover on a planet like Venus or Mercury, or to send a probe into the atmosphere of a gas giant like Jupiter, we need motors, sensors, processors, and memory that will not fail soon after they encounter high heat and pressure. Venus’s surface averages 475ºC and 90 Earth atmospheres—if it works on Venus, it will work in all but the most demanding geothermal applications.
Third: we need to mature non-rotary drilling technologies. While polycrystalline diamond compact drill bits are now enabling next-generation geothermal applications for the first time, non-rotary concepts could allow us to cost-effectively go deeper through even harder rock. Non-rotary drilling concepts include water hammers, plasma bits, lasers, mm-wave, and even a highly speculative tungsten quasi-“rods from God” idea from Danny Hillis.
Fourth: technologies to support the use of supercritical fluids. Turbines need to be specially designed for supercritical fluids. While turbines already exist for supercritical water, new designs are necessary for lower-temperature fluids like supercritical CO2. In addition, supercritical fluids tend to be more corrosive than their subcritical counterparts, as well as under higher pressure, and so new coatings and casings may be needed to contain them in the subsurface.
There are other possible improvements, but if we can solve several of the above issues, my expectation is that we would generate a robust and self-sustaining industry that can self-fund the further development needed to make next-generation geothermal energy an absolute game-changer.
In an industry ruled by learning curves, what matters most is gaining experience in the field. We need all the companies working on innovative geothermal concepts to drill their demo wells and learn from them, so that they can move on to full-size wells and learn from those, so that they can operate at scale and learn from doing that, so that they can drive down costs (eventually) to almost nothing.
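The learning-curve dynamic above can be sketched with Wright’s law: each doubling of cumulative output cuts unit cost by a fixed fraction. The 20% learning rate and $10M first-well cost below are assumptions for illustration, not industry figures:

```python
import math

# Wright's-law sketch of a drilling learning curve: each doubling of
# cumulative wells drilled cuts cost by the learning rate. Both the 20%
# rate and the $10M first-well cost are assumptions for illustration.
def well_cost(n_wells, first_cost=10e6, learning_rate=0.20):
    b = -math.log2(1 - learning_rate)   # Wright's-law exponent
    return first_cost * n_wells ** (-b)

for n in (1, 10, 100, 1000):
    print(f"well #{n:>4}: ${well_cost(n) / 1e6:.1f}M")
```

Under these assumptions, the thousandth well costs about a tenth of the first — which is why field experience, not any single breakthrough, is the main driver of cost.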
The rest of us should help them.
I have argued that the policy barriers, especially relative to fission, are not dealbreakers. But I continue to work to find policy solutions, because even non-dealbreaker problems can slow down progress. Policymakers who read this and want to learn more are welcome to reach out to me.
Adam Marblestone and Sam Rodriques have proposed Focused Research Organizations to tackle technological development problems not suited for either a startup, an academic team, or a national lab. Often, these problems arise when there is a high degree of coordinated system-building required and when the solutions are not immediately or directly monetizable. Some of the technology problems I described above, like producing a comprehensive dataset of subsurface conditions, developing temperature-hardened drilling equipment, or building systems to support supercritical fluids, may fit that bill. A geothermal-focused FRO supported by $50–100 million over the next 10 years could significantly accelerate progress.
If you want to learn more about progress in geothermal, I highly recommend registering for the upcoming PIVOT2021 conference, being held virtually July 19–23. It’s a comprehensive overview of the entire industry, and totally free. Yours truly is moderating the panel on regulatory and permitting challenges.
If we play our cards right, human civilization could soon have access to a virtually inexhaustible supply of cheap and clean energy. Shouldn’t we pull out all the stops to get there?