Where's the performance data?

Anybody can send a PCB description/schematic into an LLM with a prompt suggesting it generate an analysis, and it will diligently produce a document that perceptually resembles an analysis of that PCB. It will do that approximately 100% of the time.
But making an LLM actually deliver a sound, useful, accurate analysis would be quite an accomplishment! Is that really what you've done? How did you know you got it right? How right did you get it?
To sell an analysis tool, I'd expect to see some kind of comparison against other tooling and techniques. General success rate? False negative rate? False positive rate? How does it do against simple schematics vs large ones? What ICs and components will it recognize, and which will it fail to recognize? Does it throw an error if it encounters something it doesn't recognize? When? Do you have testimonials? Examples?
Hi! This is a totally fair question, and I appreciate you raising it. Getting reliable performance out of an LLM on something as structured as a schematic is hard, and I don’t want to pretend this is a solved problem or that the tool is infallible.
Benchmarking is tricky right now because there aren’t many true “LLM ERC” systems to compare against. You could compare against traditional ERC, but this tool is meant to complement that workflow, not replace it. For this initial MVP, most of the accuracy work has come from collecting real shipped-board schematics (mine and friends’) with known issues and iterating until the tool consistently detected them. A practical way to evaluate it yourself is to upload designs you already know have issues, along with the relevant datasheets, and see how well it picks them up. Additionally, if you have a schematic with known mistakes and are open to sharing it, feel free to reach out through the "contact us" page. Contributions like that are incredibly helpful, and I’d be happy to provide additional free usage in return.
I’ll also be publishing case studies soon with concrete examples: the original schematics, the tool’s output, what it caught (and what it missed), and comparisons against general-purpose chat LLM responses.
The goal isn’t to replace a designer’s judgment, but to surface potential issues that are easy to miss, similar to how AI coding tools flag things you still have to evaluate yourself. Ultimately the designer decides what’s valid and what isn’t.
I really appreciate the push for rigor, and I’ll follow up once the case studies are live.
I'm sure your feedback is appreciated, but the tone of your reply is that of a skeptical engineer with arms crossed. This is a Show HN post, and we should support the founder(s) if we think this is a good idea. Clearly an MVP is not going to check all your boxes, but does it have the potential to be really useful?
I see this idea as a sort of AI ERC/DRC checker that offers some incredible opportunities. Even if it only catches one small mistake, it could save thousands of dollars down the line.
It's another tool in the toolbox for hardware designers.
>> Even if it only catches one small mistake, it could save thousands of dollars down the line.
Or it could cost a design team thousands of dollars in false positives/false negatives. With zero benchmarks provided, it is entirely fair to question a product that could have material negative impacts on a hardware team.
The tool would ideally classify its output into severity levels, just like a compiler or DRC checker. If you submit a clean design, the tool should not be throwing major flags. 99% of the time you should be getting advisory outputs, which should not be misleading any designer. The 1% red flags should be easy to understand, and if you, as the designer, can't discern them, perhaps you don't understand the fundamentals of your own design.
> tone of your reply is a skeptical engineer with arms crossed.

So, just a typical HN comment?
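To make the severity-levels idea concrete, here is a minimal sketch of a compiler-style gate over review findings. All names, levels, and example findings are hypothetical, not taken from the actual tool:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    ADVISORY = 1   # style/optimization hints, safe to skim
    WARNING = 2    # likely issue, designer should review
    ERROR = 3      # almost certainly broken, blocks the design

@dataclass
class Finding:
    severity: Severity
    net: str
    message: str

def gate(findings, max_allowed=Severity.WARNING):
    """Mimic a compiler: pass the design unless a finding exceeds max_allowed."""
    blockers = [f for f in findings if f.severity.value > max_allowed.value]
    return len(blockers) == 0, blockers

ok, blockers = gate([
    Finding(Severity.ADVISORY, "VBUS", "Consider a bulk cap near the connector"),
    Finding(Severity.ERROR, "SDA", "Open-drain net has no pull-up"),
])
print(ok)  # False: the single ERROR-level finding blocks the design
```

The point of the split is the same as with a compiler: advisories can be skimmed in bulk, while anything that blocks the design must be rare and individually explainable.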
I've tried it with one of my quick circuits - it does work to some extent. It found a requirement for an IC that I missed in the datasheet. Querying it further did confuse it a bit: instead of talking about the IC, it started referring to the MCU and its limits while still referring back to the original document.
The real question is whether this has enough value to justify the pricing model [1] - I think so for a company, but it would be difficult to justify for a hobby. One thing that should be defined is what a "usage limit" actually is.
[1] https://netlist.io/pricing
Netlist.io is a web app that ingests your KiCad/Altium netlist and relevant datasheets so an LLM can reason about the actual circuit. It’s built to catch schematic mistakes that traditional ERC tools often miss, and it can even help debug already-fabbed boards by letting you describe the failure symptoms.
I built this because I was tired of shipping boards with avoidable mistakes — hopefully it saves you from a re-spin too!
Ingesting datasheets is an interesting angle compared to normal ERC, which KiCad supports out of the box, but how good is it at the ingestion?
Datasheets themselves are inconsistent and incomplete, so I’m wondering how you evaluated the accuracy of the import and what your acceptance criteria are.
Hi! Datasheets can definitely be inconsistent, and that’s one of the tougher parts of doing this well. LLMs are very much “garbage in, garbage out,” which is exactly why the tool doesn’t search the web or pull from any sort of automatic datasheet library. It only reasons from the netlist and the PDFs you upload, so you stay in full control of the context and the primary sources it can pull from. If the datasheet is clear, the results are usually very solid; if the datasheet is vague, the model reflects that instead of pretending otherwise.
I’d really recommend trying it with one of your designs: upload the netlist + a component’s datasheet and ask a specific question about the part in the design. It’s the easiest way to see how well the ingestion works in practice. Would love to hear your feedback after you try it!
From the mistakes actually found and confirmed, how likely do you think they could be progressively transformed into well defined rules that don't depend on LLM?
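To illustrate what "progressively transformed into rules" could mean in practice, here is a toy deterministic check of the kind an LLM finding might eventually harden into. The netlist structure below is a made-up simplification for the example, not KiCad's or Altium's actual format:

```python
# Hypothetical simplified netlist: net name -> list of (refdes, pin_type)
netlist = {
    "SDA": [("U1", "open_drain"), ("U2", "input")],
    "SCL": [("U1", "open_drain"), ("U2", "input"), ("R3", "passive")],
}

def missing_pullup(pins):
    """Flag nets driven open-drain with no passive (resistor) pin attached."""
    has_od = any(t == "open_drain" for _, t in pins)
    has_passive = any(t == "passive" for _, t in pins)
    return has_od and not has_passive

flags = [net for net, pins in netlist.items() if missing_pullup(pins)]
print(flags)  # ['SDA'] - SCL has R3 pulling it up, SDA has nothing
```

Rules like this are cheap, deterministic, and testable, which is exactly why the question of how many confirmed LLM findings reduce to them is interesting.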
Would this catch physical interference issues from known components? e.g. spacing conflicts, connector pin-out chirality?
I know a brilliant PCB engineer whose first major multimillion dollar R&D corporate design (decades ago) resulted in production of a modular product which couldn't physically plug in with the rest of the system (because of above issues)... I'll send him this link to see if he'll give you feedback, but that's going to be how he'd initially test your AI system (he considers it a humbling lifetime blunder).
Without any PCB design experience, my presumption is that OP's "AI product" is more of just a "fundamentals of circuit board design"[0] and not an all-expansive "how did no human ever catch such a simple multi-dimensional clash"[1]
[0] isolated voltage areas; trace attenuation avoidance; signal protection
[1] the darn thing won't even plug in, because the plug is pin'd-out backwards
Hi! Great question. Right now the tool focuses on issues that show up in the schematic. So it’s very well-equipped to handle a lot of the classic “how did no human ever catch this” mistakes — things like reversed polarity, TX/RX getting swapped, missing pull-ups, etc.
But it sounds like in this case the root cause was more of a footprint/layout issue rather than a schematic one. I’m hoping to add footprint-level checks later on, once I can ingest full board files and mechanical data.
This is a solved problem nowadays. Pretty much every pcb package produces 3D models you can plug into your existing CAD/CAM product design infrastructure.
3D is pretty solved, yes.
Pinouts... there is a reason we try to get all pinouts tested as early as possible, preferably on the first non-form-factor prototype spin if we can. In no event should key pinouts be first assigned or major changes made without a planned spin in the schedule following them....
Back in the day our hardware group created a pre-flight checklist before sending boards off to fab. This reduced our errors significantly and got rid of stupid mistakes. Your product idea sounds great and has a ton of opportunity for additional features like supply chain analysis, alternate part sourcing, EMC advisory, etc.
Thank you so much! Totally agree. Knowing people in the space to sanity-check designs has saved me countless times. I’m hoping this tool can bring some of that ‘pre-flight checklist’ group wisdom to solo and newer designers as well. Really appreciate the feature ideas too!
Isn’t the primary issue that newer designers don’t know they should run ERC (or that ERC even exists)? Isn’t your tool going to have the same issue? i.e. how do users even know they should run it in the first place? How do you plan to overcome that barrier?
I’m not against more automated checkers, I’m very much for automated checkers, but I’m curious how you plan to not repeat the mistakes of the past.
Do you have that checklist still? Can you share it?

Would this tool be able to accommodate vacuum tube designs and the associated schematics, either point-to-point or on a PCB?
Hi! If the vacuum tube schematic is designed in KiCad or Altium, then yes! If your design was made in another tool let me know which one and I will do my best to add support for it.
Somewhat related: a while ago I was working on a project and wanted to use an RS485 to TTL conversion board which came with badly translated instructions. However, somebody had reverse engineered the design and uploaded an EasyEDA schematic. I shoved the raw JSON for the schematic (which looked quite cryptic to me) into Gemini 2.5 Pro and asked it if it could understand it, and it cheerfully responded with:
> Of course, Jack. I can understand the schematic from the provided JSON file. It describes an RS485 to TTL Converter Module.
> Here is a detailed breakdown of the circuit's design and functionality
...followed by an absolutely reasonable description of the whole board. It was imprecise, but with some guidance (and by putting together my basic skills with Gemini's vast but unreliable knowledge) I was able to figure out a few things I needed to know about the board. Quite impressive.
I had a really similar experience, which is a big reason why I built this. Uploading my own schematics to the usual web LLMs gave a mix of useful notes and some pretty big misunderstandings. I really believe this tool is set up to deliver better results than the general-purpose GPT/Gemini/Claude interfaces for this kind of task. Hoping others try it and have a much better experience too!
Also good call on processing EasyEDA schematics. I hadn’t considered that initially, but I’m definitely going to add support for it.
In general, there are always "better" solutions to any problem, but finding the right balance for your budget is the key.
If doing industrial work, then consumer-grade workmanship / LLM slop is usually unacceptable. Start with the FTDI firmware tool and an isolation chip app note...
https://www.analog.com/en/products/adm2895e-1.html

Best of luck =3

Oh absolutely -- this was a no-stakes personal project, so I was happy to rely on pre-made solutions and learn a thing or two along the way.
I'm your target market - averaging a few dozen board designs a year with complexity ranging from simple interposers to designs at density limits with large US+ FPGAs.
I'm always looking for workflow and automation improvements, and the new wave of tooling has been useful for datasheet extraction/OCR, rubber-ducking calculations, or custom one-off scripts which interact with KiCad's S-expression file formats. However, I've seen minimal improvements across my private suite of electronics reasoning/design tests since GPT-4, so I'm very skeptical of review tooling actually achieving anything useful.
I tested with a prior version of a power board that had a few simple issues that were found and fixed during bringup. I uploaded the KiCad netlist, PDFs for the main ICs, and also included my internal design validation document which _includes the answers to the problems I'm testing against_. There were three areas I'd expect easy identification and modelling on:
- Resistor values for a non-inverting amplifier's gain were swapped leading to incorrect gain.
- A voltage divider supplying a status/enable pin was drawing somewhat more current than it needed to.
- The power rating of a current-sense shunt is marginal for some design conditions.
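The first of these is easy to verify by hand, which is what makes it a good test case: an ideal non-inverting amplifier's gain is 1 + Rf/Rg, so swapping the two resistors changes the gain by roughly the square of the intended ratio. The values below are illustrative, not from the board in question:

```python
def noninverting_gain(rf, rg):
    """Ideal non-inverting op-amp gain: 1 + Rf/Rg."""
    return 1 + rf / rg

rf, rg = 100e3, 10e3                 # intended: gain of 11
print(noninverting_gain(rf, rg))     # 11.0
print(noninverting_gain(rg, rf))     # 1.1 - swapped resistors, ~10x too little gain
```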
For the first test, the prompt was an intentionally naive "Please validate enable turn on voltage conditions across the power input paths". The reasoning steps appeared to search datasheets, but on what I'd have considered the 'design review' step it seems like something got stuck/hung, with no results after 10 min. A second user input to get it to continue did produce an output, and my comments:
- Just this single test consumed 100% of the chat's 330k token limit and 85% of free tier capacity, so I can't even re-evaluate the capability with a more reasonable/detailed prompt, or even giving it the solution.
- A mid-step section calculates the UV/OV behaviour of an input protection device correctly, but mis-states the range in the summary.
- There were several structural errors in the analysis, including assuming that the external power supply and lithium battery share the same input path, even though the netlist and components obviously have the battery 'inside' the power management circuit. As a result most downstream analysis is completely invalid.
- The inline footnotes for datasheets output `4 [blocked]` which is a bare-minimum UI bug that you must have known about?
- The problem and solution were in the context and weren't found/used.
- Summary was sycophantic and incorrect.
You're leaving a huge amount of useful context on the table by relying on netlist upload. The hierarchy in the schematic, comments/tables, and inlined images are lost. A large chunk of useful information in datasheets is graphs/diagrams/equations which aren't ingested as text. Netlists don't include the comments describing the expected input voltage range on a net, an output load's behaviour, or why a particular switching frequency was chosen, for example.
In contrast, GPT5.1 API with a single relevant screenshot of the schematic, with zero developer prompt and the same starting user message:
- Worked through each leg of the design and compared its output to my annotated comments (and was correct).
- Added commentary about possible leakage through a TVS diode, calculated time-constants, part tolerance, and pin loadings which are the kinds of details that can get missed outside of exhaustive review.
- Hallucinated a capacitor that doesn't exist in the design, likely due to OCR error. Including the raw netlist and an unrelated in-context learning example in the dev-message resolved that issue.
So from my perspective, the following would need to happen before I'd consider a tool like this:
- Walk back your data collection terms, I don't feel they're viable for any commercial use in this space without changes.
- An explicit listing of the downstream model provider(s) and any relevant terms that flow to my data.
- I understand the technical side of "Some metadata or backup copies may persist for a limited period for security, audit, and operational continuity" but I want a specific timeline and what that metadata is. Do better and provide examples.
- I'm not going to get into the strategy side of 'paying for tokens', but your usage limits are too vague to know what I'm getting. If I'm paying for your value add, let me bring an API key (esp if you're not using frontier models).
- My netlist includes PDF datasheet links for every part. You should be able to fetch datasheets as needed without upload.
- Literally 5 minutes of thinking about how this tool is useful for fault-finding or review would have led you to a bare-minimum set of checklist items that I could choose to run on a design automatically.
- Going further, a chat UX is horrible for this review use-case. Condensing it into a high level review of requirements and goals, with a list of review tasks per page/sub-circuit would make more sense. From there, then calculations and notes for each item can be grouped instead of spread randomly through the output summary. Output should be more like an annotated PDF.
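On the datasheet-link point above: KiCad netlist exports carry a per-component datasheet field, so pulling the URLs out is a few lines of work. The fragment below imitates the shape of a recent KiCad export (quoted strings as in KiCad 6+; the part and URL are just plausible examples, and older netlist versions differ):

```python
import re

# A fragment in the shape of a KiCad netlist export. The component,
# value, and URL are illustrative placeholders.
netlist_text = '''
(export (version "E")
  (components
    (comp (ref "U1") (value "TPS62130")
      (datasheet "https://www.ti.com/lit/ds/symlink/tps62130.pdf"))
    (comp (ref "R1") (value "10k") (datasheet "~"))))
'''

def datasheet_urls(text):
    """Collect every http(s) URL found in a (datasheet "...") field."""
    return re.findall(r'\(datasheet\s+"(https?://[^"]+)"\)', text)

print(datasheet_urls(netlist_text))
```

Parts with no datasheet assigned show up as "~" and are simply skipped, so a fetch-as-needed flow degrades gracefully to upload for those.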