The same day, there was a post on Reddit: "We built 3B and 8B models that rival GPT-5 at HTML extraction while costing 40-80x less - fully open source" [1].
Not fully equivalent to what Skyvern is doing, but still an interesting approach.
[1] https://www.reddit.com/r/LocalLLaMA/comments/1o8m0ti/we_buil...
This is really cool. We might integrate this into Skyvern actually - we've been looking for a faster HTML extraction engine
Thanks for sharing!
This is exactly the direction I am seeing agents go. They should be able to write their own tools, and we are launching something along those lines soon.
That being said...
LLMs are amazing at some coding tasks and fail miserably at others. My hypothesis is that there is some sort of practical limit to how many concepts an LLM can take into account at once, no matter the context window, given current model architectures.
For a long time I wanted to find some sort of litmus test to measure this, and I think I found one: an easy-to-understand programming problem that can be done in a single file, yet is complex enough. I have not found a single LLM able to build a solution without careful guidance.
I wrote more about this here if you are interested: https://chatbotkit.com/reflections/where-ai-coding-agents-go...
> For a long time I wanted to find some sort of litmus test to measure this, and I think I found one: an easy-to-understand programming problem that can be done in a single file, yet is complex enough. I have not found a single LLM able to build a solution without careful guidance.
Plan for solving this problem:
- Build a comprehensive design system with AI models
- Catalogue the components it fails on (like yours)
- These components are the perfect test cases for hiring challenges (immune to “cheating” with AI)
- The answers to these hiring challenges can be used as training data for models
- Newer models can now solve these problems
- You can vary this by framework (web component / React / Vue / Svelte / etc.) or by version (React v18 vs React v19, etc.)
What you’re doing with this is finding the exact contours of the edge of AI capability, then building a focused training dataset to push past those boundaries. Also a Rosetta Stone for translating between different frameworks.
I put a brain dump about the bigger picture this fits into here:
https://jim.dabell.name/articles/2025/08/08/autonomous-softw...
Also training data quality. They are horrifyingly bad at concurrent code in general, in my experience, and looking at most concurrent code in existence... yeah, I can see why.
The really depressing part about LLMs (and the limitations of ML more generally) is that humans are really bad at formal logic (which is what programming basically is). Instead of continuing down the path of making machines that made it harder for us to get it wrong, we decided to toss every open piece of code/text in existence into a big machine that reproduces those patterns non-deterministically, and use that to build more programs.
One can see the results in fields where most code is terrible (data science is where I see this most, as it's what I mostly do), but most people don't realise it. I assume this also happens for stuff like frontend, where I don't see the badness because I'm not an expert.
> is that humans are really bad at formal logic (which is what programming basically is),
The tricky part is that I don't think all programming is formal logic at all, just a small part. And the fact that different code serves different purposes really screws up an LLM's reasoning process unless you make it very clear which code is for what.
> The tricky part is that I don't think all programming is formal logic at all, just a small part.
Why do you say this? The foundation of all of computer science is formal logic and symbolic logic.
Lots of parts are more creative, or more "for humans" I might say, like building the right abstractions considering the current context and potential future contexts. There are no "right" or "wrong" abstractions, just abstractions with different tradeoffs, and a lot of programming is like this: not a binary "this is correct, this is wrong", but somewhere along a spectrum of "this is what I subjectively prefer considering these tradeoffs".
There is a reason a lot of programmers see programming as having a lot in common with painting and other creative activities.
> Why do you say this? The foundation of all of computer science is formal logic and symbolic logic.
Yes, but it also has to deal with "the real world", which is only logical if you can encode a near-infinite number of variables. Instead, we create leaky abstractions in order to actually get work done.
Codex (GPT-5) + Rust (with or without Tokio) seems to work out well for me, asking it to run the program and validate everything as it iterates on a solution. I've used the same workflow with Python programs too, and it seems to work OK, but not as well as with Rust.
Just for curiosity's sake, what language have you been trying to use?
Or when code is fully vectorizable, they default to using loops even when explicitly told not to use loops. Code an LLM wrote for a fairly straightforward problem of mine took 18 minutes to run.
My own solution? 1.56 seconds. I consider myself to be at an intermediate skill level, and while LLMs are useful, they likely won't replace any but the least talented programmers. Even then, I'd value a human with critical thinking paired with an LLM over an even more competent LLM.
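The actual problem isn't shared, so as a purely hypothetical illustration of the kind of gap being described, compare a plain Python loop with its vectorized NumPy equivalent:

```python
# Hypothetical illustration only -- the parent's actual problem isn't shared.
# Sum of elementwise products over two large arrays, looped vs vectorized.
import time

import numpy as np

rng = np.random.default_rng(0)
a = rng.random(10_000_000)
b = rng.random(10_000_000)

start = time.perf_counter()
total = 0.0
for i in range(len(a)):           # the loop style LLMs often default to
    total += a[i] * b[i]
print(f"loop:       {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
total = float(np.dot(a, b))       # the vectorized equivalent
print(f"vectorized: {time.perf_counter() - start:.3f}s")
```

On typical hardware the vectorized version is orders of magnitude faster, which is the scale of difference described above.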
In my experience, because the Clojure concurrency model is just incredibly sane and easy to get right, LLMs have no difficulty with it.
With the upcoming release of Gemini 3.0 Pro, we might see a breakthrough on that particular issue. (Those are the rumors, at least.) I'm sure it's not fully solved, but possibly greatly improved.
I feel like this is how normal work is. When I have to figure out how to use a new app/API/etc., I go through an initial period where I am just clicking around, shouting into the ether, etc., until I get the hang of it.
And then the third or fourth time it's automatic. It's weird, but sometimes I feel like the best way to make agents work is to meta-think about how I myself work.
I have a 2yo and it's been surreal watching her learn the world. It deeply resembles how LLMs learn and think. Crazy
Odd, I've been struck by how differently LLMs and kids learn the world.
You don’t get that whole uncanny valley disconnect do you?
> It deeply resembles how LLMs learn and think
What? LLMs don't think or learn in the sense humans do. They have absolutely no resemblance to a human being. This must be the most ridiculous statement I've read this year.
I am sorry, but you are scoffing at the humanity of your kid; you know that, right?
How so? Your kid has a body that interacts with the physical world. An LLM is trained on terabytes of text, then modified by human feedback and rules to be a useful chatbot for all sorts of tasks. I don't see the similarity.
If you watch how agents attempt a task, fail, try to figure out what went wrong, try again, repeat a couple more times, then finally succeed -- you don't see the similarity?
> try to figure out what went wrong
LLMs don't do this. They can't think. If you just use one for like five minutes, it's obvious that just because the text on the screen says "Sorry, I made a mistake, there are actually 5 r's in strawberry", that doesn't mean there's any thought behind it.
I mean, you can literally watch their thought process. They try to figure out reasons why something went wrong, and then identify solutions. Often in ways that require real deduction and creativity. And have quite a high success rate.
If that's not thinking, then I don't know what is.
No, I see something resembling gradient descent, which is fine, but it's hardly a child.
No, because an agent doesn’t learn, it’s just continuing a story. A kid will learn from the experience and at the end will be a different person.
You just haven't added the right tools together with the right system/developer prompt. Add an `add_memory` and a `list_memory` tool (or automatically inject the right memories for the right prompts/LLM responses) and you have something that can learn.
You can also take it a step further and add automatic fine-tuning once you start gathering a ton of data, which will rewire the model somewhat.
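A minimal sketch of what those tools might look like; the names `add_memory`/`list_memory` come from the comment above, and the flat-file storage and prompt injection are illustrative assumptions, not any framework's API:

```python
# Minimal sketch of the memory tools suggested above. Storage format and
# prompt injection are illustrative assumptions, not a specific framework's API.
import json

MEMORY_FILE = "memories.json"

def _load() -> list[str]:
    try:
        with open(MEMORY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def add_memory(note: str) -> str:
    """Called by the model to persist a lesson, e.g. after fixing a mistake."""
    notes = _load()
    notes.append(note)
    with open(MEMORY_FILE, "w") as f:
        json.dump(notes, f)
    return "saved"

def list_memory() -> list[str]:
    """Called by the model (or injected automatically) to recall lessons."""
    return _load()

# Inject past lessons into the system prompt so the next run starts
# from what previous runs "learned".
system_prompt = (
    "You are a coding agent.\n"
    "Lessons from past runs:\n"
    + "\n".join(f"- {m}" for m in list_memory())
)
```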
Perhaps it can improve but it can't learn because that requires thought. Would you say that a PID regulator can "learn"?
I guess it depends on what you understand "learn" to mean.
But in my mind, if I tell the LLM to do something and it does it wrong, then I ask it to fix it, and if in the future I ask the same thing and it avoids the mistake it made the first time, then I'd say it has learned to avoid that pitfall. I know very well it hasn't "learned" like a human would; I just added the information to the right place. But for all intents and purposes, it "learned" how to avoid the same mistake.
A person is not their body.
The person is the data that they have ingested and trained on through the senses exposed by their body. The body is just an interface to reality.
That is a very weird and fringe definition of what a person is.
If you have a different life experience than what you had so far, wouldn’t you be a different person?
Yes, it is easy. LLMs have reduced my maintenance work on scraping tasks I manage (lots of specialized high-traffic adfield sites) by 99%
What used to be a constant almost daily chore with them breaking all the time at random intervals is now a self-healing system that rarely ever fails.
One of the uses for AI I'm excited about - maintaining systems, keeping up with the moving targets.
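The setup itself isn't shared, but as a guess at the shape of such a system, a self-healing loop might look something like this (`ask_llm` is a placeholder for whatever completion API you use; the selector and field are made up):

```python
# Guess at the shape of a "self-healing" scraper -- the actual setup above
# isn't shared. Idea: when a selector stops matching, hand the fresh HTML
# to an LLM and ask it to propose a replacement selector.
import requests
from bs4 import BeautifulSoup

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM provider here")

def scrape_price(url: str, selector: str) -> tuple[str, str]:
    html = requests.get(url, timeout=30).text
    node = BeautifulSoup(html, "html.parser").select_one(selector)
    if node is None:
        # Site changed: ask the LLM for a new CSS selector.
        selector = ask_llm(
            f"This CSS selector no longer matches the product price: {selector}\n"
            f"Propose a replacement for this HTML:\n{html[:20000]}"
        ).strip()
        node = BeautifulSoup(html, "html.parser").select_one(selector)
    if node is None:
        raise RuntimeError("self-heal failed; flag for human review")
    return node.get_text(strip=True), selector  # persist selector if it changed
```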
Could you elaborate on your setup please?
Interesting. Could you elaborate? Is there a specific reason that it doesn't do 100% of the work already?
That's the dream
We had a similar realization here at Thoughtful and pivoted towards code generation approaches as well.
I know the authors of Skyvern are around here sometimes -- how do you think about code generation versus vision-based approaches to agentic browser use, like OpenAI's Operator, Claude Computer Use, and Magnitude?
From my POV, vision-based approaches are superior, but they are less amenable to codegen.
I think they're complementary, and that's the direction we're headed.
We can ask the vision-based models to output why they are doing what they are doing, and fall back to code-based approaches for subsequent runs.
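A rough sketch of that hybrid pattern -- a guess at the shape of it, not Skyvern's actual implementation: let the vision agent do the first run, record its grounded actions, then replay them cheaply and only fall back to the vision agent when the replay breaks.

```python
# Rough sketch of the hybrid pattern described above; not Skyvern's actual
# implementation. First run: a vision agent drives the browser and we record
# each grounded action. Later runs replay the cheap script and fall back to
# the vision agent only when the replay breaks.
import json

def run_vision_agent(task: str) -> list[dict]:
    """Placeholder: vision model drives the browser and returns actions
    like {"action": "click", "selector": "#submit", "why": "..."}."""
    raise NotImplementedError

def replay(actions: list[dict], page) -> None:
    # `page` is a Playwright-style page object.
    for step in actions:
        if step["action"] == "click":
            page.click(step["selector"])
        elif step["action"] == "fill":
            page.fill(step["selector"], step["value"])

def run_task(task: str, page, cache_file: str = "actions.json") -> None:
    try:
        with open(cache_file) as f:
            replay(json.load(f), page)   # fast, deterministic path
    except Exception:
        actions = run_vision_agent(task)  # slow, robust path
        with open(cache_file, "w") as f:
            json.dump(actions, f)         # cache for next time
```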
Unrelated, but Thoughtful gave us some very, very helpful feedback early in our journey. We are big fans!
Off topic, but since the article mentioned improper usage of the DOM: I'd point to the UK government's design system and its accessibility work [1]. It's well documented, and I hope all governments have the same standard. I guess they paid a huge amount of money to consultants and vendors.
[1] https://design-system.service.gov.uk/components/radios/
In AI-first workshops, by now I tell them for the last exercise: "no scrapers". The lesson is to separate reasoning (AI) from data (which you have to bring). AI-coded scrapers seem logical, but they always fail: scraping is a scaling issue, not a reasoning challenge. Also, the most interesting websites are not keen on new scrapers.
A point orthogonal to this: consider whether you need browser automation at all.
If a website isn't using Cloudflare or a JS-only design, it's generally better to skip Playwright. All the major AIs understand BeautifulSoup pretty well, and they're likely to write you a faster, less brittle scraper.
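For concreteness, a minimal version of the kind of scraper meant here; the URL and selectors are made up:

```python
# Minimal example of the requests + BeautifulSoup scraper meant above.
# The URL and selectors are made up for illustration.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/listings", timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for row in soup.select("table.listings tr"):
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if cells:
        print(cells)
```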
The vast majority of the modern internet falls into one of those two buckets though, no?
I mostly scrape government data so the sites are a little 'behind' on that trend, but no. Even JS heavy sites are almost always pulling from a JSON or graphql source under the hood.
At scale, dropping the heavier dependencies and network traffic of a browser is meaningful.
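For the JS-heavy case, the usual move is to find the underlying JSON call in the browser's network tab and hit it directly; the endpoint and fields below are hypothetical:

```python
# For JS-heavy sites: find the JSON/GraphQL request in the browser's
# network tab and call it directly. Endpoint and fields are hypothetical.
import requests

resp = requests.get(
    "https://example.gov/api/v1/records",
    params={"page": 1, "per_page": 100},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("results", []):
    print(record.get("id"), record.get("name"))
```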
Yeah, reverse-engineering APIs is another fantastic approach. They aren't enough if you are dealing with wizards (e.g. Typeform), but they can work really well.
IF you can use crawlers, definitely do.
They aren't enough for anything that's login-protected, or that requires interacting with wizards (e.g. JS, downloading files, etc.).
If.
Not at all, in my opinion. It's a zero-sum game against anti-bot technologies that also employ AI to block scrapers.
You gain experience getting interactions with other agencies optimised by dealing with them yourself. If the AI you rely on fails, you are dead in the water. I'm speaking as a fairly resilient 50-year-old with plenty of hands-on experience, but I'm concerned for the next generation. I know generational concern has existed since the invention of writing, and the world hasn't fallen apart, so what do I know? :)
Your example use case is automatically filling out an IRS form, operated by the sort of IRS department that makes a web form that's only up during business hours? Do you realize how legally risky that is to create, and how legally risky it will be to operate?
What are some of the risks? This is a public web form available on the IRS website
Over the past few days I've spent a lot of time dealing with terribly designed UIs. Some legitimate and desired use cases are impossible because poor logic excludes them.
Is AI capable of saying, "This website sucks, and doesn't work - file a complaint with the webmaster?"
I once had similar problems with the CIA's World Factbook. I shudder to think what an AI would do there.
It's funny, one time we had a customer that wanted to use us to test their website for bugs...
Skyvern kept suggesting improvements unrelated to the issue they were testing for
So how do clients process this sort of feedback? As a dev, “negative user feedback” gives me scares that “failed behavior testing” does not.
The AI isn’t mad, and won’t refuse to renew. Unless it’s being run by the client of course.
Are clients using your platform to assess vendors?
No, we don't have a lot of usage in that direction. People mainly use us to log into websites and either fill out forms or download files!
I'd be all over Skyvern if only they had enterprise compliance agreements available.
We do have them! We are HIPAA compliant, have SOC 2 Type 2, and offer self-hosted deployments.
Thank you for responding! Where is your compliance information? How do I sign a BAA?
Send me an email suchintan@skyvern.com - we can get you started
this matches our personal experience, too
I tried Skyvern like 6 months ago and it didn't work for scraping a site that sounds like "welp". Ended up doing it myself. I was trying to scrape data across the Bay Area.
That said, I'd try it again, but I don't want to spend money again.
That 'welp' probably has a tonne of bot detection going on, given its popularity and the sheer amount of data it makes available without an account.
The hardest part of scraping is bypassing Cloudflare/captchas/fingerprinting, etc.
Definitely. What are your thoughts on the Cloudflare agent identity?
The hardest part is not telling anyone how you're bypassing it!
I can talk about this bypass because they've fixed it: a site I was scraping rolled their own custom captcha that was just multiple choice. But they didn't have a nonce, so I would just attempt all the choices, and one of them would let me in.
The captcha put you on notice that your scraping wasn't authorized. Depending on the details and circumstances, bypassing it and scraping anyways may have been a crime.
I misread this as 'sky scrapers'