I use AI for a few hours every day, primarily for software and electronics design, troubleshooting, and searching through massive amounts of data. I'm amazed by how quickly AI has evolved over the past year, but I'm also shocked by the sheer amount of mediocre and misleading output it generates. AI, as most people encounter it today, reasons from language patterns and isn't necessarily technically correct. It lacks system-level comprehension and fails to account for environmental dependencies such as memory leaks, real-time system constraints, drift and tolerances, aging, thermal expansion, ESD, EMC, lightning surges, vibration, resonance frequencies, water absorption, the effect of surface roughness on a design, thermal radiation, and so on. Nor does it truly understand use cases, including how users or unforeseen events can deviate from expectations.
Additionally, AI lacks DFM (Design for Manufacturability) expertise and real-world experience. Nor is a neural network necessarily the best algorithm for a given problem: an FFT, for example, is the natural tool for frequency detection, even though a network could be trained to do the same job (see the sketch below).
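To illustrate that point, here is a minimal Python sketch (my own example, not from the original post) of FFT-based dominant-frequency detection. It is deterministic and easy to verify against a datasheet or a scope trace, whereas a trained network doing the same task adds opacity that then has to be verified too.

```python
# Minimal sketch: classical FFT-based frequency detection instead of a
# trained neural network. Illustrative only; function name and parameters
# are my own choices, not from any particular product.
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the strongest frequency component (Hz) of a real-valued signal."""
    spectrum = np.abs(np.fft.rfft(signal))                  # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum[1:]) + 1]               # skip the DC bin

# Example: a 50 Hz sine sampled at 1 kHz for one second
t = np.arange(0, 1.0, 1.0 / 1000.0)
print(dominant_frequency(np.sin(2 * np.pi * 50 * t), 1000.0))  # ~50.0 Hz
```

The result can be checked by hand in seconds, which is exactly the kind of verifiability that matters in the safety-critical work discussed below.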
AI-generated designs are based on patterns learned from the work of average engineers (with IQs typically between 120-140, varying experience levels, and the occasional bad day when a design was created), as well as past AI-generated designs—which in some cases is highly problematic. As a result, AI rarely produces work that surpasses the average; it just does it faster. In my personal experience, AI generates vast amounts of output that could take a lifetime to verify, which is a major issue in safety-critical designs. It often makes bold claims and sounds highly convincing, even when incorrect. Many times, I’ve asked AI to verify its results, cross-check with sources, or provide references, only to find that the links it provides don’t actually support its claims. It’s only when confronted with concrete proof—like a specific parameter in a datasheet—that it acknowledges its mistakes. This is unsettling and, if undetected, could lead to serious consequences. I’ve spent a significant amount of time correcting AI-generated errors.
On a different note, I personally enjoy designing mechanical components, and I believe many ViaCAD/SharkCAD users—including artists, designers, and semi-professional hobbyists—would agree. AI-driven automation removes the creative aspect that makes designing enjoyable, making it a no-go for a program with this kind of user base. For these users, stability is far more important than AI features. Nobody wants their software to crash and wipe out hours or even weeks of work, especially in their valuable free time. The creative process is frustratingly disrupted when you have to redo everything from scratch.
Based on my experience, the development team behind this software seems relatively small, and it's possible that much of the work is handled by a single person with support from remote teams. It may be that AI (which is new and exciting) has not been received as positively by users as expected, and that this has influenced development priorities. Regardless, I have deep respect for the work that has gone into this program, and despite its flaws, ViaCAD/SharkCAD remains a fantastic piece of software.
I sincerely hope the developer behind the software continues its development, and as long as updates remain reasonably stable, I’ll keep purchasing them.