Launch HN: Silurian (YC S24) – Simulate the Earth

338 points by rejuvyesh 2 months ago | 155 comments

Hey HN! We’re Jayesh, Cris, and Nikhil, the team behind Silurian (https://silurian.ai). Silurian builds foundation models to simulate the Earth, starting with the weather. Some of our recent hurricane forecasts can be visualized at https://hurricanes2024.silurian.ai/.

What is it worth to know the weather forecast one day earlier? That’s not a hypothetical question: traditional forecasting systems have been improving their skill at a rate of about one day per decade. In other words, today’s 6-day forecast is as accurate as the 5-day forecast ten years ago. No one expects this rate of improvement to hold steady; it has to slow down eventually, right? Well, in the last couple of years, GPUs and modern deep learning have actually sped it up.

Since 2022 there has been a flurry of deep learning research on weather at companies like NVIDIA, Google DeepMind, Huawei, and Microsoft (some of it by yours truly). These models have little to no built-in physics and learn to forecast purely from data. Astonishingly, this approach, done correctly, produces better forecasts than traditional simulations of the physics of our atmosphere.

Jayesh and Cris came face-to-face with this technology’s potential while they were respectively leading the [ClimaX](https://arxiv.org/abs/2301.10343) and [Aurora](https://arxiv.org/abs/2405.13063) projects at Microsoft. The foundation models they built improved on ECMWF’s forecasts, considered the gold standard in weather prediction, while using only a fraction of the available training data. Our mission at Silurian is to scale these models to their full potential and push them to the limits of physical predictability. Ultimately, we aim to model all infrastructure that is impacted by weather, including the energy grid, agriculture, logistics, and defense. Hence: simulate the Earth.

Before we do all that, this summer we built our own foundation model, GFT (Generative Forecasting Transformer), a 1.5B-parameter frontier model that simulates global weather up to 14 days ahead at approximately 11 km resolution (https://www.ycombinator.com/launches/Lcz-silurian-simulate-t...). Despite the scarcity of extreme weather data in historical records, we have seen that GFT performs extremely well at predicting 2024 hurricane tracks (https://silurian.ai/posts/001/hurricane_tracks). You can play around with our hurricane forecasts at https://hurricanes2024.silurian.ai. We visualize these using [cambecc/earth](https://github.com/cambecc/earth), one of our favorite open source weather visualization tools.

We’re excited to be launching here on HN and would love to hear what you think!

shoyer 2 months ago | next |

Glad to see that you can make ensemble forecasts of tropical cyclones! This is absolutely essential for useful weather forecasts of uncertain events, and I am a little disappointed by the frequent comparisons (not just yours) of ML models to ECMWF's deterministic HRES model. HRES is more of a single realization of plausible weather than a best estimate of "average" weather, so this is a bit of apples vs. oranges.

One nit on your framing: NeuralGCM (https://www.nature.com/articles/s41586-024-07744-y), built by my team at Google, is currently at the top of the WeatherBench leaderboard and actually builds in lots of physics :).

We would love to see metrics from your model in WeatherBench for comparison. When/if you have them, please do reach out.

cbodnar 2 months ago | root | parent | next |

Agreed, looking at ensembles is essential in this context, and that's what the end of our blog post is meant to highlight. At the same time, a good control run is also a prerequisite for good ensembles.

Re NeuralGCM: indeed, our post should have said "*most* of these models". It definitely proves that combining ML and physics models can work really well. Thanks for your comments!

bbor 2 months ago | root | parent | prev |

HN never disappoints, jeez. Thanks for chiming in with some expert context! I highly recommend that any meteoronoobs like me check out the PDF version of the linked paper; the diagrams are top notch: https://www.nature.com/articles/s41586-024-07744-y.pdf

Main takeaway, gives me some hope:

  Our results provide strong evidence for the disputed hypothesis that learning to predict short-term weather is an effective way to tune parameterizations for climate. NeuralGCM models trained on 72-hour forecasts are capable of realistic multi-year simulation. When provided with historical SSTs, they capture essential atmospheric dynamics such as seasonal circulation, monsoons and tropical cyclones. 
But I will admit, I clicked the link to answer a more cynical question: why is Google funding a presumably super-expensive team of engineers and meteorologists to work on this without a related product in sight? The answer is both fascinating and boring:

  In recent years, computing has both expanded as a field and grown in its importance to society. Similarly, the research conducted at Google has broadened dramatically, becoming more important than ever to our mission. As such, our research philosophy has become more expansive than the hybrid approach to research we described in our CACM article six years ago and now incorporates a substantial amount of open-ended, long-term research driven more by scientific curiosity than current product needs.
From https://research.google/philosophy/. Talk about a cool job! I hope such programs rode the intimidation-layoff wave somewhat peacefully…

bruckie 2 months ago | root | parent |

Google uses a lot of weather data in their products (search, Android, maps, assistant, probably others). If they license it (they previously used AccuWeather and Weather.com, IIRC), it presumably costs money. Now that they generate it in house, maybe it costs less money?

(Former Google employee, but I have no inside knowledge; this is just my speculation from public data.)

Owning your own data and serving systems can also make previously impossible features possible. When I was a Google intern in 2007 I attended a presentation by someone who had worked on Google's then-new in-house routing system for Google Maps (the system that generates directions between two locations). Before, they licensed a routing system from a third party, and it was expensive ($) and slow.

The in-house system was cheap enough to be almost free in comparison, and it produced results in tens of milliseconds instead of many hundreds or even thousands of milliseconds. That allowed Google to build the amazing-at-the-time "drag to change the route" feature that would live-update the route to pass through the point under your cursor. It ran a new routing query many times per second.

d_burfoot 2 months ago | prev | next |

> These models have little to no built-in physics and learn to forecast purely from data. Astonishingly, this approach, done correctly, produces better forecasts than traditional simulations of the physics of our atmosphere.

Haha. The old NLP saying "every time I fire a linguist, my performance goes up", now applies to the physicists....

joshdavham 2 months ago | prev | next |

> Silurian builds foundation models to simulate the Earth, starting with the weather.

What else do you hope to simulate, if this becomes successful?

CSMastermind 2 months ago | root | parent | next |

The actual killer application would be flooding. Insurance has invested billions in trying to simulate risk here, and the models are still relatively weak.

raprosse 2 months ago | root | parent | next |

100% agree. Flooding is the single costliest natural disaster.

But it's non-trivial to scale these new techniques to the field. A major factor is the spatial scale of interest: FEMA's FIRM maps are typically at 10 m resolution, not 11 km.

thechao 2 months ago | root | parent |

Low-income neighborhoods are a good signal for high-risk flood zones. There's a demographic angle, too.

dubcanada 2 months ago | root | parent |

Are you suggesting that flood prevention only happens in higher-income neighbourhoods? Flood prevention tends to fall to county engineers, not private individuals. It doesn't matter how much money you have; you can't just dig up a road to put in proper flood prevention measures like drainage and grading.

pimlottc 2 months ago | root | parent | next |

The Army Corps of Engineers also does a lot of flood management work, and they use a cost/benefit analysis when deciding which projects to approve that takes into account the value of the real estate being protected. And even then, the local community has to put up a large share of the funding. So it definitely ends up favoring richer communities.

99 Percent Invisible did an episode about this recently:

https://99percentinvisible.org/episode/nbft-05-the-little-le...

legel 2 months ago | root | parent |

It was fascinating to see the counter-proposal to the Army Corps of Engineers for Miami's design of a downtown wall to deal with storm surges: https://dirt.asla.org/2022/09/12/uproar-causes-u-s-army-corp...

The counter proposal was indeed funded by the City of Miami, to point out how ridiculous it would be to have a 20 foot concrete wall around the city.

As a local resident, I loved seeing this sad 3D render in particular, which even has graffiti on it nearly spelling "Berlin": https://i0.wp.com/dirt.asla.org/wp-content/uploads/2022/09/0...

In seriousness, it was really cool to see the counter proposal's "nature-based solution" which would design 39 acres of distributed barrier islands around the coastline, to block storm surge naturally.

tgtweak 2 months ago | root | parent | prev | next |

Would be an interesting relationship to explore. I think you can look at it as both cause and effect. Effect, in that flooding destroys wealth: often-flooded areas will not have longstanding infrastructure or buildings, and the hits to local real estate that result from flooding can affect non-flooded buildings as well. Cause, in that property and income taxes in low-income regions may be insufficient to fund infrastructure or public works that prevent or mitigate flooding and flood damage.

andai 2 months ago | root | parent | prev |

Extreme example, but I saw a video of a "homeless" family in Japan that lived on a flood plain. They lived there because it was the only free spot.

danielmarkbruce 2 months ago | root | parent | prev |

Why is it difficult? Is it predicting the amount of rain that is difficult? Or modeling the terrain that turns a given amount of rain into a problem? Or something else?

nikhil-shankar 2 months ago | root | parent | prev | next |

We want to branch out to industries which are highly dependent on weather. That way we can integrate their data together with our core competency: the weather and climate. Some examples include the energy grid, agriculture, logistics, and defense.

probablypower 2 months ago | root | parent |

You'll have trouble simulating the grid, but for energy data you might want to look at (or get in touch with) these people: https://app.electricitymaps.com/map

They're a cool little team based in Copenhagen. It would be useful, for example, to look at the correlation between your weather data and regional energy production (solar and wind). The next level would be models to predict national hydro storage, but that is a lot more complex.

My advice is to drop the grid itself to the bottom of the list, and I say this as someone who worked at a national grid operator as its primary grid analyst. You'll never get access to sufficient data, and your model will never be correct. You're better off starting from a national 'adequacy' level and working your way down based on information made available by market operators.

TwiztidK 2 months ago | root | parent | next |

Actually, it seems like a great time to get involved with the grid (at least in the US). To comply with FERC Order 881, all transmission operators need to adjust their line ratings based on ambient temperature, with hourly predictions 10 days into the future, by mid-2025. That seems like a great opportunity to work directly with the ISOs (which have regional models and live data) on improving weather data.

nikhil-shankar 2 months ago | root | parent | prev | next |

These are great resources, thank you. If you're open to it, we'd love to meet and chat about the energy space since we're newcomers to that arena. Shoot us an email at contact@silurian.ai

cshimmin 2 months ago | root | parent | prev |

Do earthquakes next!

Signed,

A California Resident

bbor 2 months ago | root | parent | next |

Seems hard… weather is a structure in the Piagetian sense, with lots of individual elements influencing each other via static forces. Earthquakes are (AFAIU, as a non-expert Californian) more about physical rock structures within the crust that we have only a vague idea of. Although hey, hopefully I’m wrong; maybe there’s a kind of pre-earthquake tremor for some kinds of quake that a big enough transformer could identify…

markstock 2 months ago | root | parent | prev | next |

The Earth is a multi-physics complex system, and OP's claim to "Simulate the Earth" is misleading. Methods that work on the atmosphere may not work on other parts. There are numerous scientific projects working on simulating earthquakes, both using ML and more "traditional" physics.

nikhil-shankar 2 months ago | root | parent | prev |

If there is sufficient data, we can train on it!

keyboardcaper 2 months ago | root | parent |

Would geolocated historical seismographic data do?

K0balt 2 months ago | root | parent | next |

I suspect (possibly incorrectly) that earthquakes are a chaotic phenomenon resulting from a multilayered complex system, a lot like a lottery ball picker.

Essentially random outputs from deterministic systems are unfortunately not rare in nature…. And I suspect that because of the relatively higher granularity of geology vs the semicohesive fluid dynamics of weather, geology will be many orders of magnitude more difficult to predict.

That said, it might be possible to make useful forecasts in the 1 minute to 1 hour range (under the assumption that major earthquakes often have a dynamic change in precursor events), and if accuracy was reasonable in that range, it would still be very useful for major events.

Looking at the outputs of chaotic systems like geolocated historical seismographic data might not be much more useful than looking at previous lottery ball selections to predict the next ones, even if it is 4-10 orders of magnitude better. Which is to say, the predictive power might still not be useful even though there is some pattern in the noise.

Generative AI needs a large and diverse training set to avoid overfitting problems. Something like high resolution underground electrostatic distribution might potentially be much more predictive than past outputs alone, but I don’t know of any such efforts to map geologic stress at a scale that would provide a useful training corpus.

bbor 2 months ago | root | parent | prev |

They’re empiricists — the only ~~real~~ conclusive way to answer that question is to try it, IMO!

The old ML maxim was “don’t expect models to do anything a human expert couldn’t do with access to the same data”, but that’s clearly going the way of Moore’s Law… I don’t think a meteorologist could predict 11 km² of weather 10 days out very accurately, and I know for sure that a neuroscientist couldn’t recreate someone’s visual field from fMRI data!

brunosan 2 months ago | prev | next |

Can we help you? We build the equivalent for land, as a non-profit. It's basically a geo Transformer MAE model (plus DINO, plus Matryoshka, plus ...), but the largest and most trained (roughly 35 trillion pixels). Most importantly, it's fully open source and openly licensed. I'd love to help you replace land masks with land embeddings; they should significantly help downscale local effects (e.g. forest versus city) that, AFAIK, most weather forecasts simplify with static land cover classes at most. https://github.com/Clay-foundation/model

nikhil-shankar 2 months ago | root | parent |

Hi, this looks really cool! Can we meet? Shoot us an email at contact@silurian.ai

jonplackett 2 months ago | root | parent |

Maybe between the two of you, you can tell me why my Alexa is telling me there’s no rain today, but it’s raining right now.

brunosan 2 months ago | root | parent | next |

You'll need to subscribe to Alexa Weather Plus, for only $9.99/month. Now seriously: yes, hyperlocal short-term weather forecasting should be a commodity, maybe even a public utility?

iammattmurphy 2 months ago | root | parent | next |

That makes me appreciate that in Vancouver we have Weatherhood, which is free to use.

tgtweak 2 months ago | root | parent |

I like AccuWeather's MinuteCast, which is a higher-resolution short-term forecast (next 60 minutes) that is not just pulling the forecast from the weather station nearest to you.

Windy(.com) premium also has a great hybrid weather radar + forecast view, recently released, which I find has been very effective at predicting rain at a specific location on the map versus "nearby". With smaller weather patterns it is entirely possible for it to rain a few blocks away but not at your location. An 11 km resolution weather forecast (as referenced above) will not be able to capture this nuance.

gabinator 2 months ago | root | parent | prev |

In case you're curious: scientists have been trying to simulate and predict the weather for over half a century, and it's led to some really awesome math/compsci discoveries.

If you've ever heard of the Lorenz attractor, the butterfly effect, or strange attractors: those chaotic systems were discovered because of a discrepancy between two parallel weather simulations. One continued from the machine's full-precision state, while the other was restarted from printed intermediate results rounded to a few decimal places; that tiny rounding difference made the two simulations diverge hugely.

Lorenz was trying to simulate weather by subdividing the atmosphere into tons and tons of cubes. Really interesting reading/video watching, tbh.
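
For anyone who wants to see it, here's a minimal sketch of the effect (the classic Lorenz system with the standard textbook parameters, forward-Euler integration; the 1e-6 perturbation stands in for Lorenz's rounded printout):

  # Two runs of the same deterministic system, differing only by a tiny
  # perturbation of the initial condition, diverge within a few thousand steps.
  def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
      dx = sigma * (y - x)
      dy = x * (rho - z) - y
      dz = x * y - beta * z
      return x + dx * dt, y + dy * dt, z + dz * dt

  a = (1.0, 1.0, 1.0)          # full-precision initial condition
  b = (1.0, 1.0, 1.000001)     # the same state, rounded ever so slightly

  for step in range(3000):
      a = lorenz_step(*a)
      b = lorenz_step(*b)
      if step % 500 == 0:
          print(step, abs(a[0] - b[0]))  # the gap grows exponentially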

furiousteabag 2 months ago | prev | next |

Curious to see what other things you will simulate in the future!

Shameless plug: recently we've built a demo that allows you to search for objects in San Francisco using natural language. You can look for things like Tesla cars, dry patches, boats, and more. Link: https://demo.bluesight.ai/

We tried using Clay embeddings, but we quickly found that they perform poorly for similarity search compared to embeddings produced by CLIP fine-tuned on OSM captions (SkyScript).

brunosan 2 months ago | root | parent |

Howdy! Clay makers here. Can you share more? Did you try Clay v1 or v0.2? What image size, and embeddings from which instrument?

We did try to relate OSM tags to Clay embeddings, but it didn't scale well. We have not given up, but we are reconsidering (https://github.com/Clay-foundation/earth-text). I think SatCLIP plus OSM is a better approach, or LLM embeddings mapped to Clay embeddings...

furiousteabag 2 months ago | root | parent |

Hey hey! We tried Clay v1 with 768-dimensional embeddings, using your tutorials. We then split NAIP imagery of SF into chips and indexed them. Afterwards, we performed image-to-image similarity search like in your explorer.

We tried to search for bridges, beaches, tennis courts, etc. It worked, but it didn't work well: the top of the ranking was filled with unrelated objects. We found that the similarity scores were bunched too tightly together (values between 0.91 and 0.92 across ~200k tiles, differing only in the fourth decimal place), so the encoder distinguished very little between objects.
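
For context, the search itself is the standard recipe, something like this sketch (array names hypothetical, random data standing in for real chip embeddings):

  import numpy as np

  # Image-to-image search over pre-computed chip embeddings.
  emb = np.random.randn(200_000, 768).astype(np.float32)  # stand-in for real embeddings
  emb /= np.linalg.norm(emb, axis=1, keepdims=True)       # L2-normalize once

  query = emb[0]                    # embedding of the query chip
  scores = emb @ query              # cosine similarity against every tile
  top10 = np.argsort(-scores)[:10]  # best matches

  # With real Clay v1 embeddings we saw scores compressed into a
  # ~0.01-wide band, so this ranking barely separated anything.
  print(scores.min(), scores.max())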

I believe Clay can be used with additional fine-tuning for classification and segmentation, but the standalone embeddings are pretty poor.

Check this out: https://github.com/wangzhecheng/SkyScript. It is a dataset of OSM tags and satellite images. CLIP fine-tuned on it gives good embeddings for text-to-image search as well as image-to-image.

sltr 2 months ago | prev | next |

Check out Climavision. They use AI to generate both hyper-local ("will there be a tornado over my town in the next 30 minutes?") and seasonal ("will there be a drought next fall?") forecasts, and they do it faster than the National Weather Service. They also operate their own private radar network to fill observational gaps.

Disclosure: I work there.

https://climavision.com/

bbor 2 months ago | prev | next |

Fascinating. I have two quick questions, if you find the time:

  …we’ve built our own foundation model, GFT (Generative Forecasting Transformer), a 1.5B parameter frontier model that simulates global weather…
I’m constantly scolding people for trying to use LLMs for non-linguistic tasks, and thus getting deceptively disappointing results. The quintessential example is arithmetic, which makes me immediately dubious of a transformer built to model physics. That said, you’ve obviously found great empirical success already, so something’s working. Can you share some of your philosophical underpinnings for this approach, if they exist beyond “it’s a natural evolution of other DL tech”? Does your transformer operate in the same rough way as LLMs, or have you radically changed the architecture to better approach this problem?

  Hence: simulate the Earth.
When I read “simulate”, I immediately think of physics simulations built around interpretable/symbolic systems of elements and forces, which I would usually put in basic opposition to unguided/connectionist ML models. Why choose the word “simulate”, given that your models are essentially black boxes? Again, a pretty philosophical question that you don’t necessarily have to have an answer to for YC reasons, lol

Best of luck, and thanks for taking the leap! Humanity will surely thank you. Hopefully one day you can claim a bit of the NWS’ $1.2B annual budget, or the US Navy’s $infinity budget — if you haven’t, definitely reach out to NRL and see if they’ll buy what you’re selling!

Oh and C) reach out if you ever find the need to contract out a naive, cheap, and annoyingly-optimistic full stack engineer/philosopher ;)

cbodnar 2 months ago | root | parent | next |

Re question 1: transformers are already working pretty well for video generation (e.g. see Sora). You can also think of weather as a sort of video generation problem where you have hundreds of channels (one for each variable). So this is consistent with transformer success stories from other domains.

Re question 2: simulations don't need to be explainable. Being able to simulate simply means being able to provide a reasonable evolution of a system given some set of initial conditions and other constraints. Even for physics-based simulations, when run at huge scale as with weather, it's debatable to what degree they are "interpretable".

Thanks for your questions!

OrvalWintermute 2 months ago | prev | next |

Am skeptical about the business case for this, given the huge government investment in parts of this space.

What will your differentiators be?

Are you paying for weather data products?

danielmarkbruce 2 months ago | root | parent |

Better on some dimension will work. More accurate, faster, more fine grained, something.

Better weather predictions are worth money, plain and simple.

amirhirsch 2 months ago | prev | next |

Weather models are chaotic. Are ML methods more numerically stable than physics-based simulations? And how do they compare in terms of compute requirements? The Aurora paper seemed promising, but I would love a better summary comparison than what I get out of Claude.

Once upon a time I converted the spectral-transform shallow-water model (STSWM, or PSTSWM in its parallelized form) from Fortran to Verilog. I believe this is the spectral-transform method we have used for forecasting for the last 30 years. The 10-day forecasts would differ by ~20% if we truncated each operation to FP64 instead of Intel's FP80.
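
A toy version of that precision sensitivity, iterating a chaotic map at two precisions instead of FP80 vs FP64 in the spectral code (logistic map chosen purely for illustration):

  import numpy as np

  # Iterate the same chaotic map in float32 and float64 and
  # watch the two trajectories part ways.
  x32 = np.float32(0.4)
  x64 = np.float64(0.4)
  r32, r64 = np.float32(3.9), np.float64(3.9)  # chaotic regime

  for _ in range(100):
      x32 = r32 * x32 * (np.float32(1.0) - x32)
      x64 = r64 * x64 * (np.float64(1.0) - x64)

  print(float(x32), float(x64))  # same equation, different answers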

nikhil-shankar 2 months ago | root | parent |

Great questions.

1. The truth is we still have to investigate the numerical stability of these models. Our GFT forecast rollouts are around 2 weeks (~60 steps) long, and things are stable in that range. We're working on longer-range forecasts internally.

2. The compute requirements are extremely favorable for ML methods. Our training costs are significantly lower than the fixed costs of the supercomputers that government agencies require, and each forecast can be generated on one GPU in a few minutes instead of on one supercomputer in a few hours.

3. There's a similar floating-point story in deep learning with FP32, FP16, BF16 (and even lower these days)! An exciting area to explore.

Angostura 2 months ago | prev | next |

Have you had a crack at applying this approach to the effectively unforecastable - earthquakes, for example?

ijustlovemath 2 months ago | prev | next |

> Astonishingly, this approach, done correctly, produces better forecasts than traditional simulations of the physics of our atmosphere.

It seems like this is another instance of The Bitter Lesson, no?

CharlesW 2 months ago | root | parent | next |

For anyone else who's also in today's lucky 10,000: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Alex-Programs 2 months ago | root | parent | next |

Thank you - I hadn't heard of it before. It seems to have parallels with LLMs - our most general intelligent systems have come from producing a workable architecture for what seems to be the bare minimum for communicating intelligence while also having plenty of training data (language), then simply scaling up.

I thought this was a good quote:

> We want AI agents that can discover like we can, not which contain what we have discovered.

agentultra 2 months ago | root | parent | prev | next |

I'm not sure I buy The Bitter Lesson, tbh.

Deep Blue wasn't a brute-force search. It did rely on heuristics and human knowledge of the domain to prune search paths. We've always known we could brute-force search the entire space but weren't satisfied with waiting until the heat death of the universe for the chance at an answer.

The advances in machine learning do use various heuristics and techniques to solve particular engineering challenges in order to solve more general problems. It hasn't all come down to Moore's Law, which stopped bearing large fruit some time ago.

However that still comes at a cost. It requires a lot of GPUs, land, energy, and fresh water, and Freon for cooling. We'd prefer to use less of these resources if possible while still getting answers in a reasonable amount of time.

ijustlovemath 2 months ago | root | parent | next |

Deep Blue had to use the techniques it did because of the limitations of the hardware of the time. Deep Blue would almost certainly lose against AlphaZero, even if you tuned it for modern hardware. All you have to do "manually" is teach it the rules and give it a loss function; then you just let it do its thing.

It's certainly true that "just throw a bunch of GPUs at it" is wasteful, but it does achieve results.

agentultra 2 months ago | root | parent |

Certainly does! We’ve had expert systems and various AI techniques for decades that weren’t efficient enough to run even though theoretically they would yield answers.

And even though solutions to many such problems were in the NP or NP-hard categories it didn’t mean that we couldn’t get useful results.

But we still got better results by applying what we know about search strategies and reinforcement to provide guidance and heuristics. Even AlphaZero didn’t just use the most general algorithms and throw hardware at the problem; it still took quite a lot of specialized software and methods to fine-tune the overall system to produce the results we want.

FergusArgyll 2 months ago | root | parent | prev |

Today's best chess engines use no hand-crafted heuristics. I think starting with Stockfish 16 they got rid of HCE (hand-crafted evaluation); they're now neural nets, and they would absolutely eat Deep Blue.

photochemsyn 2 months ago | root | parent | prev | next |

That's a highly controversial claim that would need a whole host of published peer-reviewed research papers to support it. Physics-based simulations (initial-state input, then evolution according to physics applied to grids) have improved, but not really because of smaller grids; rather, by running several dozen different models and then providing the average (and the degree of convergence) as the forecast.

Notably, forecast skill is quantifiable, so we'd need to see a whole lot of forecast predictions using what is essentially a stochastic-modelling (historical data) approach. Given that the climate is steadily warming, with all that implies for water vapor feedback etc., it's reasonable to assume that historical data isn't that great a guide to future behavior; e.g., when you start having 'once every 500 years' floods every decade, the past is not a good guide to the future.

yorwba 2 months ago | root | parent |

Given 50 states and independent weather in each state, on average one state would experience a "once every 500 years" extreme weather event every decade. Of course, in reality weather is not independent across political borders, but there are also many more locations where flood levels can be measured than one per state. So depending on the details, "once every 500 years" may not be as rare as it sounds, even without any deviation from historical patterns.
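
The back-of-envelope arithmetic, for the skeptical (assuming independence, which is the generous case):

  # Expected number of "once every 500 years" events observed somewhere.
  sites = 50              # e.g. one flood gauge per US state
  years = 10              # one decade
  p_per_year = 1 / 500    # annual probability at each site

  expected = sites * years * p_per_year
  print(expected)         # 1.0 -- one "500-year" event per decade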

1wd 2 months ago | prev | next |

Does anyone predict the economy, population, etc. by simulating individual people based on real census information? Monte Carlo simulation of major events (births, deaths, ...) based on known statistics conditioned on age, economic background, location, education, profession, and so on? There aren't so many people that this would be computationally infeasible, and states and companies have plenty of data to feed into such systems. Is it not needed because other approaches give better results, or is it already being done?
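
To make the idea concrete, a minimal sketch of the kind of simulation I mean (mortality and birth rates entirely made up, not real census statistics):

  import random

  # Simulate individuals year by year, with event probabilities
  # conditioned on age.
  def death_prob(age):
      return min(0.001 * 1.1 ** (age / 2), 1.0)  # toy mortality curve

  ages = [random.randint(0, 90) for _ in range(100_000)]

  for year in range(10):
      survivors = [a + 1 for a in ages if random.random() > death_prob(a)]
      births = int(0.012 * len(survivors))       # toy crude birth rate
      ages = survivors + [0] * births

  print(len(ages))  # population after a simulated decade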

jandrewrogers 2 months ago | root | parent | next |

I've done a lot of advanced research in this domain. It is far more difficult than people expect for a few reasons.

The biggest issue is that the basic data model for population behavior is a sparse metastable graph with many non-linearities. How to even represent these types of data models at scale is a set of open problems in computer science. Using existing "big data" platforms is completely intractable; they are incapable of expressing what is needed. These data models also tend to be quite large: tens of PB at a bare minimum.

You cannot use population aggregates like census data. Doing so produces poor models that don't match ground truth in practice, for reasons that are generally understood. It requires distinct behavioral models of every entity in the simulation, i.e. a basic behavioral profile of every person. It is very difficult to get entity data sufficient to produce a usable model. Think privileged telemetry from mobile carrier backbones at country scale (which is a lot of data; this can get into petabytes per day for large countries).

Current AI tech is famously bad at these types of problems. There is an entire set of open problems here around machine learning and analytic algorithms that you would need to research and develop, and there is negligible literature around it. You can't just throw TensorFlow or LLMs at the problem.

This is all doable in principle, it is just extremely difficult technically. I will say that if you can demonstrably address all of the practical and theoretical computer science problems at scale, gaining access to the required data becomes much less of a problem.

ag_rin 2 months ago | root | parent | prev | next |

I’m also super interested in this kind of question. The late Soviet Union and its cybernetics researchers were really into simulating this kind of thing to improve the planned economy. But I’m curious whether something like this can be done on a more local scale, to improve things like a single company’s output.

Nicholas_C 2 months ago | root | parent | prev | next |

Agent-based modeling (ABM) is an attempt at this. I've wanted to forecast the economy on a per-person basis since playing Sim City as a kid (although Sim City is not an ABM, to be clear). From doing a bit of research a while back, it seemed like the research and real-world forecasting have been done on a pretty small scale, nothing as grand as I'd hoped. It's been a while since I've looked into it, so I would be happy to be corrected.

cossatot 2 months ago | root | parent | prev |

Doyne Farmer's group at Oxford does agent-based economics simulations in this vein. He has a new book, 'Making Sense of Chaos', that describes the approach.

7e 2 months ago | prev | next |

Every weather forecasting agency in the world is pivoting to ML methods, and some of them have very deep pockets and industry partnerships. Some big tech companies are forging ahead on their own. Unless you have proprietary data, you just bought yourself a low-paying job with long hours. Typical poor judgement of naive YC founders: founding a company is more exciting than being successful.

andrewla 2 months ago | prev | next |

Is the plan to expand from weather forecasting into climate simulation? Given the complexity of finding initial conditions on the Earth, a non-physical (or only implicitly physical) model seems like it could offer a very promising alternative to physical models. The existing physical models, while often grossly correct (in terms of averages), suffer from producing unphysical configurations on a local basis.

nikhil-shankar 2 months ago | root | parent |

Yes, 100%! We'll still take a statistical/distributional approach to long-range climate behavior rather than trying to predict exact atmospheric states. Keep an eye out for more news on this.

nxobject 2 months ago | prev | next |

Congratulations on splitting off to make some money! I remember reading about ClimaX a year ago and being extremely excited – especially because of the potential to lower the costs of large physical simulations like these.

Have specific industries reached out to you about commercial applications (natural resource exploration, for example)?

scottcha 2 months ago | prev | next |

Are you planning on open sourcing your code and/or model weights? Aurora code and weights were recently open sourced.

cbodnar 2 months ago | root | parent |

Not immediately, but we will consider open-sourcing some of our future work. At the very least, we definitely plan to be very open with our metrics and how well (or badly) our models are doing.

legel 2 months ago | prev | next |

Congrats to Jayesh and team! I was lucky to meet the founding CEO recently, and happy to let everyone know he's very friendly and of course super intelligent.

As a fellow deep learning modeler of Earth systems, I can also say that what they're doing really is 100% top notch. Congrats to the team and YC.

abdellah123 2 months ago | prev | next |

Did you explore other branches of AI, namely KRLs? It's an underrated area, especially in recent years.

Using the full expressive power of a programming language to model the real world and then execute AI algorithms on highly structured and highly understood data seems like the right way to go!

kristopolous 2 months ago | prev | next |

This really, really looks like a nullschool clone (https://earth.nullschool.net/). Is it not?

nikhil-shankar 2 months ago | root | parent | next |

Hi, it totally is. That's one of our favorite weather visualization projects. We're using Cameron Beccario's open source version of nullschool for our forecasts. We cited him above in the blurb and also on our about page (https://hurricanes2024.silurian.ai/about.html)

kristopolous 2 months ago | root | parent |

so what exactly are you launching that I can see here?

lighter943 2 months ago | root | parent | next |

I’m confused by this thread. The posters have mentioned that they are building their own foundation model for climate/weather prediction and are using a well known open source tool in the field for viz. Where’s the ambiguity here?

rybosome 2 months ago | root | parent | prev |

I suggest you read the post. Reading is typically how information is transmitted.

EDIT: the post I am responding to was altered to sound much less confrontational. It was originally:

> So what exactly are you “launching” and why does it require venture capital?

kristopolous 2 months ago | root | parent | prev |

Alright: what they presented, in its current state, is just a clone of a 10-year-old project with a 2.5-month-old weather forecast and an AI story attached to it.

rybosome 2 months ago | root | parent | next |

The project this “cloned” is just a data visualization tool. You can plug any data into it - good data, bad data.

They are launching an AI model which they claim produces higher quality weather data than traditional models relying on physical simulation. And they used this visualization library to make an engaging website.

Constructively, you have gotten to this position by overreacting to a perceived “clone” and failing to be enlightened by the numerous comments and the original post explaining the purpose.

Respectfully, I suggest you take a breath and try to disassociate from whatever emotional reaction you are having about this.

kristjansson 2 months ago | root | parent | prev | next |

> Silurian builds foundation models to simulate the Earth, starting with the weather.

It's the first line man. The visual is just a visual, their product is the data being visualized.

jay-barronville 2 months ago | root | parent | prev | next |

I don’t think I understand what your issue is with them. They used an open-source project to visualize their data, were open about doing so, and cited the creator of the project.

What more did you want from them? (Genuine question.)

kristopolous 2 months ago | root | parent |

something within the interface that more clearly illustrates their product differentiation.

nullschool is obscure enough to the general audience that when I saw it there was an immediate red flag.

If only specialized scientists can see the difference between the sites, it's a presentation problem.

LewisJEllis 2 months ago | root | parent | next |

The interface in question is the second link in the post. To get to the interface without any of the other relevant context, you would have to:

- skip reading the post (which explains all of this)

- skip the first link in the post (which explains all of this)

- go straight to the second link in the post, to the interface

- skip the "about" link in the interface (which explains all of this)

99catmaster 2 months ago | root | parent | prev |

Wow, that’s uncanny.

kristopolous 2 months ago | root | parent |

[flagged]

rybosome 2 months ago | root | parent |

You went to the effort of posting two HN comments, tweeting, and taking a screenshot because you were annoyed that this project used an open-source library and was transparent about doing so.

kristopolous 2 months ago | root | parent |

I didn't know it was open source. I thought it was a ripoff.

danielmarkbruce 2 months ago | root | parent |

A ripoff of the visualization layer? Even if it were, who cares? That's not the work. What's next? Is a new chess engine a ripoff because it uses a standard chess board for visualization? Is a new protein prediction model a ripoff because it uses the standard visualization?

kristopolous 2 months ago | root | parent |

There's a long precedent of knockoffs, scams, and skullduggery in silicon valley.

danielmarkbruce 2 months ago | root | parent | next |

The answer isn't to not even read what they are doing and just assume the worst.

kristopolous 2 months ago | root | parent |

They edited the post.

Regardless, you're just trying to personally attack me. That's a great use of both our time.

danielmarkbruce 2 months ago | root | parent |

It was rhetorical. And telling you that the answer is not to assume the worst without reading what they are doing is not a personal attack.

You are out here implying these guys are a fraud. Being told to pull your head in is not personal.

koolala 2 months ago | prev | next |

I'm hoping the singularity will coincide with a large-scale AI achieving simulated Earth consciousness. Human intelligence is only a speck compared to the combined intelligence of nature.

xpe 2 months ago | root | parent |

What is "simulated Earth consciousness"?

salmonfamine 2 months ago | root | parent | next |

All of this AGI/singularity stuff is quite literally science fiction, so it can be whatever OP wants it to be.

xpe 2 months ago | root | parent | next |

The comment above seems too dismissive, in my opinion. There is a lot of credible, rational thinking and research around what AGI might entail. There are also many interesting theories of consciousness that are worth considering. However, I don’t buy panpsychism or notions of an “earth spirit”; materialism works, as best I can tell, and I’m not ready to throw it out. / I’m just asking the GP to explain.

koolala 2 months ago | root | parent | prev |

I have a specific idea in mind but this is true too :) AI = Imagination!

xpe 2 months ago | root | parent |

What is your idea?

koolala 2 months ago | root | parent |

A Large Language Model plus a Large Earth-data Model, merged into one, like an image model mixed with a language model. It just needs a way to understand the patterns of life, the way LLMs appear to condense the patterns of language and thought.

koolala 2 months ago | root | parent | prev |

A merging of language consciousness, like how LLMs act today, with a new understanding of all the Earth's natural life (not just human intelligence), so it could communicate a holistic view of life's complexities, beauty, and intelligence in all human languages.

Large Language Model + Large Earth Model

hwhwhwhhwhwh 2 months ago | prev | next |

So ChatGPT has a cutoff date on the stuff it can talk about. Predicting the weather sounds like ChatGPT predicting next week's news from the data it was trained on. I can see how it could predict some things, like Argentina winning a football match scheduled for next week against India, given that India is bad at football. But can it really give useful predictions? Can it predict things which are not public, like who Joe Rogan will interview in two weeks, or the list of companies in YC's next batch?

sillysaurusx 2 months ago | root | parent | next |

Sure, not every model is an autoregressive transformer. And even a GPT could give some useful predictions if you stuff the context window with things it's been fine-tuned to predict. We did that to get GPT to play chess a few years ago.

Specifically, I could imagine throwing current weather data at the model and asking it what it thinks the next most likely weather change is going to be. If it's accurate at all, then that could be done on any given day without further training.

The problems happen when you start throwing data at it that it wasn't trained on, so it'll be a cat and mouse game. But it's one I think the cat can win, if it's persistent enough.

nikhil-shankar 2 months ago | root | parent | prev | next |

Our training cutoff date was the end of 2022. Here's our blog post on the 2024 hurricane season: https://silurian.ai/posts/001/hurricane_tracks

hwhwhwhhwhwh 2 months ago | root | parent |

I just don't understand how it can produce new knowledge it doesn't have access to. Are you folks claiming that future weather is a function of previous weather, and that the model is capable of replicating that function?

counters 2 months ago | root | parent |

No one is claiming that there is "new knowledge" here.

The entire class of deep learning or AI-based weather models involves a very specific and simple modeling task. You start with a very large training set which is effectively a historical sequence of "4D pictures" of the atmosphere. Here, "4D" means that you have "pixels" for latitude, longitude, altitude, and time. You have many such pictures for relevant atmospheric variables like temperature, pressure, winds, etc. These sequences are produced by highly sophisticated weather models run in what's called a "reanalysis" task, where they consume a vast array of observations and try to create the 4D sequence of pictures that is most consistent with both the physics in the weather model and the various observations.

The foundation of AI weather models is taking that 4D picture sequence and asking the model to "predict" the next picture in the sequence, given the past 1 or 2 pictures. If you can predict the picture 6 hours from now, then you can feed that output back into the model to predict the next 6 hours, and so on. AI weather models are trained such that this process is mostly stable, i.e. the small errors you begin to accumulate don't "blow up" the model.
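
In code, the rollout is nothing more exotic than this sketch (model and state are placeholders here, not any particular system's API):

  # Autoregressive rollout: each predicted "picture" becomes the
  # input for the next 6-hour step.
  def rollout(model, state, n_steps=60):
      forecast = []
      for _ in range(n_steps):     # 60 steps x 6 h = 15 days
          state = model(state)     # predict the next atmospheric state
          forecast.append(state)   # ...and feed it back in
      return forecast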

Traditionally, you'd use a physics-based model to accomplish this task: using the current 3D weather state as your input, you integrate the physics equations forward in time to make the prediction. In many ways, today's AI weather models can be thought of as a black box or emulator that reproduces what those physics-based models do, but without needing to be told much, if any, of the underlying physics. Depending on your "flavor" of AI weather model, the architecture might draw some analogies to the underlying physics. For example, NVIDIA's models use Fourier Neural Operators, so you can think of them as learning families of equations which can be combined to approximate the state of the atmosphere (I'm _vastly_ over-simplifying here). Google DeepMind's GraphCast tries to capture both local and non-local relationships between fields through its graph attention mechanisms. Microsoft's Aurora (and Silurian's model, by provenance, assuming it's the same general type) tries to capture local relationships through sliding windows passed over the input fields.

So again: no new knowledge or physics. Just a surprisingly effective application of traditional DL/AI tools to a specific problem (weather forecasting) that ends up working quite well in practice.

hwhwhwhhwhwh 2 months ago | root | parent |

Thanks for the explanation. I am still a bit confused about how this takes care of the errors. I can see how the weather prediction for tomorrow might have fewer errors, but shouldn't the errors accumulate as you feed the predicted weather back in as the model's input? Wouldn't the results start diverging from reality pretty soon? Isn't that the reason the current limit is close to 6 days? How exactly does this model fix that issue?

counters 2 months ago | root | parent |

It doesn't take care of the errors. They still "accumulate" over time, leading to the same divergence that traditional physics-based weather models experience. In fact, the hallmark that these AI models are _doing things right_ is that they show realistic modes of error growth when compared with those physics-based models, and there is already early peer-reviewed literature suggesting this is the case.

This _class_ of models (not Aurora or Silurian's model specifically) can potentially improve on this a bit by incorporating forecast error at longer lead times into the core training loss. This is already done in practice for some major models like GraphCast and Stormer. But these models are almost certainly not a magical silver bullet for 10x'ing forecast accuracy.
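
Concretely, "incorporating forecast error at longer lead times" roughly means training on a multi-step rollout loss rather than a single-step one; a schematic sketch (names hypothetical, not any model's actual training code):

  # Unroll the model several steps and penalize error at every
  # lead time, not just the first 6-hour step.
  def rollout_loss(model, state, targets, mse):
      loss = 0.0
      for target in targets:       # targets at +6h, +12h, +18h, ...
          state = model(state)
          loss = loss + mse(state, target)
      return loss / len(targets)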

the_arun 2 months ago | root | parent | prev |

In India we use natural intelligence - astrology - for predicting results. Note that it has a high percentage of hallucinations.

SirLJ 2 months ago | prev | next |

How accurate is the weather prediction for a city for tomorrow on average for the min and max temperature? Thanks a lot!

baetylus 2 months ago | prev | next |

Exciting idea, and it seems like a well-proven team. Good luck, and don't mind the endemic snark in the other threads. A couple of basic questions:

1. How will you handle one-off events, like volcanic eruptions?

2. Where do you start with this? Do you pitch a meteorology team? Is it a "compare and see for yourself"?

cbodnar 2 months ago | root | parent |

Volcanoes are a tricky one. There are a few volcanic eruptions in the historical data, but it's unclear whether that is enough to predict reasonably well how future eruptions (especially at unseen locations) will affect the weather. It would be fun to look at some events and see what the model does. Thanks for the suggestion!

Re where we start: a lot of organisations across different sectors need better weather predictions, or simulations that depend on weather. Measuring the skill of such models is a relatively standard procedure, and people can check the numbers.

julienlafond 2 months ago | prev | next |

How did your hurricane forecasts perform versus reality?

nikhil-shankar 2 months ago | root | parent |

We explored several examples from the 2024 hurricane season in our blog post: https://silurian.ai/posts/001/hurricane_tracks. We overlaid the true paths of the hurricanes on our predictions for everyone to see!

yellow_postit 2 months ago | root | parent |

I'm finding the posts confusing -- is the prediction the images?

What exactly is predicted and what is the actual path in those videos?

nikhil-shankar 2 months ago | root | parent |

In the videos the true path is the dashed line and the government prediction is the solid line. Our prediction, from our GFT model, is the animation which plays in the background.

resters 2 months ago | prev | next |

Very cool! I was thinking of doing space weather simulation using vocap and a representation of signals in the spatial domain. Maybe that could be added.

itomato 2 months ago | prev | next |

I keep waiting for someone to integrate data from NEON

cbodnar 2 months ago | root | parent |

I am curious, what would you do with this data if you had infinite resources?

itomato 2 months ago | root | parent |

I wouldn't need infinite resources, just a practical integration.

For one, I always thought it would be informative for things like game engines to have a reference point. How fast do streams typically flow in this type of environment? What tree species are even in this geo?

https://data.neonscience.org/data-api/graphql/explorer/build...

Where we're going, we don't need "Data Products".

bschmidt1 2 months ago | prev | next |

Wow, so excited for this.

I had a web app online in 2020-22 called Skim Day that predicted skimboarding conditions on California beaches; it was mostly powered by weather APIs. The tide predictions were solid, but the weather itself was almost never right, especially the wind speed. Additionally, some important metrics were missing, like the slope of the beach, which changes significantly throughout the year and matters a lot for skimboarding.

Basically, I needed AI. And this looks incredible. Love your website and even the name and concept of "Generative Forecasting Transformer (GFT)" - very cool. I imagine the likes of Surfline, The Weather Channel, and NOAA would be interested to say the least.

cbodnar 2 months ago | root | parent | next |

That's pretty cool! It would be great to learn more about your app and how the wave/tide prediction worked. Is there some place to read more about this?