Using the UIA tree as the currency for LLMs to reason over always made more sense to me than computer-vision, screenshot-based approaches. It’s true that not all software exposes itself correctly via UIA, but almost all the important stuff does. VS Code is one notable exception (but you can turn on accessibility support in the settings).
"Important" is subjective. In the healthcare space, I’d make the claim that most applications don’t expose themselves correctly (native or web).
CV and direct mouse/keyboard interactions are the "base" interface, so if you solve this problem, you unlock just about every automation use case.
(I agree that if you can get good, unambiguous, actionable context from accessibility/automation trees, that’s going to be superior)
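To make that concrete, here's a minimal sketch of what "the UIA tree as currency" can look like: walk a window's UIA tree and print it as structured text a model can reason over. The specifics are my own assumptions, not anything from this thread: Python with pywinauto's UIA backend, and Notepad as the target window.

    # Minimal sketch: dump a window's UIA tree (control type, name, automation id)
    # so a model gets structured text instead of pixels.
    # Assumes pywinauto (pip install pywinauto) and a running Notepad window.
    from pywinauto import Desktop

    def dump_uia_tree(element, depth=0, max_depth=5):
        info = element.element_info
        print("  " * depth +
              f"{info.control_type} | name={info.name!r} | id={info.automation_id!r}")
        if depth < max_depth:
            for child in element.children():
                dump_uia_tree(child, depth + 1, max_depth)

    if __name__ == "__main__":
        # Resolve the Notepad window via the UIA backend and walk its tree.
        win = Desktop(backend="uia").window(title_re=".*Notepad.*").wrapper_object()
        dump_uia_tree(win)

That text dump is what you'd feed to the model alongside (or instead of) a screenshot.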
Agreed. I've noticed that when ChatGPT parses screenshots it writes out some Python code to do the parsing, and at least in the tests I've done (things like "what is the RGB value of the bullet points in the list?") it ends up writing and rewriting the script five or so times and then gives up. I haven't tried others, so I don't know whether that approach is unique to it, but it definitely feels fragile and slow to me.
I noticed something similar. I asked it to extract a GUID from an image and it wrote a Python script to run OCR against it... and got it wrong. Prompting a bit more seemed to finally trigger it to use its native image analysis, but I'm not sure what the trick was.
I've run into this when uploading audio and text files; I have to yell at it not to write any code and to use its native abilities to do the job.
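For what it's worth, the script it writes for something like the bullet-point RGB question above tends to look like the sketch below, which is also why the approach is fragile: the pixel coordinates are guesses. (Pillow and the coordinates are my own illustration, not actual ChatGPT output.)

    # Illustration of the brittle, throwaway style of script in question:
    # sample a hard-coded pixel that is hopefully inside a bullet glyph.
    from PIL import Image

    img = Image.open("screenshot.png").convert("RGB")
    bullet_xy = (42, 310)            # hypothetical coordinates of a bullet point
    print(img.getpixel(bullet_xy))   # prints an (R, G, B) tuple, e.g. (51, 51, 51)

If the guessed coordinates miss the glyph, the model rewrites the script and tries again, which matches the write/rewrite/give-up loop described above.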
I recently tried using Qwen VL and Moondream to see whether, off the shelf, they could accurately detect most of the interesting UI elements on the screen, either in the browser or in your average desktop app.
It was a somewhat naive attempt, but it didn't look like they performed well without a fair amount of additional work. I wonder if there are models that do much better, maybe whatever OpenAI uses internally for Operator, but I'm not clear how bulletproof that one is either.
These models weren't trained specifically for UI object detection and grounding, so it's plausible that if they were trained on UI data long enough, they would actually be quite good. Curious if others have insight into this.
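In case anyone wants to reproduce the experiment, this is roughly the naive approach I mean: hand the model a screenshot and ask for UI elements back as JSON. The sketch below assumes an OpenAI-compatible endpoint (many local servers for Qwen VL expose one); the base URL, model name, and prompt are placeholders, and grounding quality is exactly the open question.

    # Naive sketch: ask a vision-language model to return UI elements as JSON.
    # Assumes an OpenAI-compatible server (base_url and model name are placeholders).
    import base64
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
    with open("screenshot.png", "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="qwen2-vl",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": 'List every interactive UI element in this screenshot as '
                         'JSON: [{"label": str, "box": [x, y, w, h]}].'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)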
Most Electron software doesn't follow accessibility guidelines and exposes nothing over UIA
Cool. Reminds me of using SendKeys() in Visual Basic 6 in the 90s
https://learn.microsoft.com/en-us/dotnet/api/microsoft.visua...
I loved SendKeys()!
Used it to write programs that would run in the background & spook my friends by "typing" quotes from movies at random times on their computer.
SendKeys() in VB powered basically all of the AOL chat bots in the 90’s.
It’s how I accidentally learned the Win32 API
Me too! With SendKeys and some Win32 API calls, I wrote an AOL add-on (available through Keyword: addons) called AoLOL!. It was my first software business.
Q: How do you identify the AOL window? A: Look for an app with titlebar = "America[space][space]Online"
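Translated to today's tooling, that trick is something like the pywin32 sketch below: find the window by its exact title (the double space is the point), focus it, and send keystrokes. Sending keys via WScript.Shell is my guess at a modern equivalent; it's purely illustrative, since there's no AOL window left to find.

    # Sketch: find a window by its exact title and send keystrokes to it.
    # Assumes pywin32 (pip install pywin32); purely illustrative.
    import win32com.client
    import win32gui

    hwnd = win32gui.FindWindow(None, "America  Online")  # note the two spaces
    if hwnd:
        win32gui.SetForegroundWindow(hwnd)
        shell = win32com.client.Dispatch("WScript.Shell")
        shell.SendKeys("You've got mail!{ENTER}")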
And BeOS/Haiku has the "Hey" command, which does literally the same but goes far beyond key input: you can interact with widgets too. Under Unix, there's xdotool and friends.
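For anyone who hasn't used it, the xdotool version of the same idea looks roughly like this; it's driven from Python here only to keep the examples in one language, and it assumes an X11 session and a running gedit window (both my assumptions).

    # Sketch: focus a window by name and type into it with xdotool on X11.
    import subprocess

    subprocess.run(["xdotool", "search", "--name", "gedit",
                    "windowactivate", "--sync"], check=True)
    subprocess.run(["xdotool", "type", "--delay", "50", "Hello World"], check=True)
    subprocess.run(["xdotool", "key", "Return"], check=True)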
I feel vaguely vindicated that the agent can't figure out how to use the modern Save as workflow, either, and reverts to the traditional dialog.
Looks awesome. I've attempted my own implementation, but I never got it to work particularly well; "Open Notepad and type Hello World" was a triumph for me. I landed on the UIA tree + annotated screenshot combination too, but mine was too primitive, and I tried to use GPT, which isn't as good at image tasks as the Gemini used here. Great job!
Very cool - does anyone know of an OSX equivalent?
Preferably one that is similarly able to understand and interact with web page elements, in addition to app elements and system elements.
There are MCPs that work with the macOS Accessibility stack, like https://github.com/steipete/macos-automator-mcp, https://github.com/ashwwwin/automation-mcp, https://github.com/mediar-ai/MacosUseSDK, and https://github.com/baryhuang/mcp-remote-macos-use.
For web page elements, you could drive the browser via `do JavaScript` or use a dedicated browser MCP (Chrome DevTools MCP, Playwright MCP).
Working on something very similar in Rust. It's quite magical when it works (that's a big caveat, as I'm trying to make it work with local LLMs). Very cool implementation, and imo, this is the future of computing.
I remember an older friend asking me recently: "Will there be a thing soon where I can make my computer go on auto-pilot?"
I guess I can now answer, "Yes, I think so."
Genuinely asking: what do you think are the use cases for someone requiring this?
Can it farm a ber rune for me?
Yeah, computer-use agents remind me of the game automators from back in the day, like the RuneScape autoclickers built with SCAR. I posted about this a while back, haha: https://news.ycombinator.com/item?id=29716900#29720860
LLMs do a pretty good job of using pywin32 for programs that support COM, like Office.
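As a concrete example, the kind of pywin32/COM script an LLM will happily generate for Word looks like the sketch below (Word must be installed; the output path is made up).

    # Minimal sketch: automate Word over COM with pywin32.
    import win32com.client

    word = win32com.client.Dispatch("Word.Application")
    word.Visible = True
    doc = word.Documents.Add()
    doc.Range().Text = "Hello from COM automation."
    doc.SaveAs(r"C:\temp\hello.docx")   # made-up output path
    doc.Close()
    word.Quit()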