Summarizer

Language-Specific Performance

Observations that AI performs better with some programming languages like Python and JavaScript compared to Java, Scala, or enterprise frameworks


The performance of LLMs in coding appears heavily influenced by the popularity of the language, with developers noting that mainstream options like Python and JavaScript yield much more reliable results than Java or Scala. This disparity often allows developers to quickly bridge gaps in their own knowledge, such as a JavaScript expert using AI to handle complex Python edge cases like rate limiting. However, this success doesn't always translate to specialized fields; LLMs frequently struggle with the nuances of embedded systems and can become trapped in repetitive, trivial errors when tasked with more rigid enterprise frameworks. Ultimately, while AI excels at "filling the gaps" for languages with high-volume training data, traditional technical expertise remains essential for niche hardware and troubleshooting complex logic.

4 comments tagged with this topic

View on HN · Topics
If you want to get into embedded you’d be better suited learning how to use an o-scope, a meter, and asm/c. If you’re using any sort of hardware that isn’t “mainstream” you’ll be pretty bummed at the results from an LLM.
View on HN · Topics
It might be role-specific. I'm a solutions engineer, and a large portion of my time is spent making demos for customers. LLMs have been a game-changer for me, because not only can I spit out _more_ demos, I can handle more edge cases that people run into in demos. For example, someone wrote in asking how to use our REST API with Python. I KNOW a common issue people run into is that they forget to handle rate limits, but I also know more JavaScript than Python and have limited time, so before, I'd write: ``` # NOTE: Make sure to handle the rate limit! This is just an example. See example.com/docs/javascript/rate-limit-example for a JS example doing this. ``` Unsurprisingly, more than half of customers would just ignore the comment, forget to handle the rate limit, and then write in a few months later. With Claude, I just write "Create a customer demo in Python that handles rate limits. Use example.com/docs/javascript/rate-limit-example as a reference," and it gets me 95% of the way there. There are probably 100 other small examples like this where I had the "vibe" to know where a customer might trip up, but not the time to plug all the little documentation-example holes myself. Ideally, yes, hiring a full-time person to plug these holes would be great, but if you're resource-constrained, paying Anthropic for tokens is a much faster and cheaper solution in the short term.
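The rate-limit handling this commenter describes might look something like the sketch below (the commenter's actual API and docs page aren't shown, so the retry helper and its parameters here are hypothetical). It retries a request on HTTP 429, honoring a `Retry-After` header when the server sends one and falling back to exponential backoff otherwise:

```python
import time


def fetch_with_rate_limit(do_request, max_retries=5, base_delay=1.0):
    """Call do_request() and retry on HTTP 429 (Too Many Requests).

    Honors the Retry-After header when present; otherwise backs off
    exponentially: base_delay, 2*base_delay, 4*base_delay, ...
    """
    for attempt in range(max_retries):
        resp = do_request()
        if resp.status_code != 429:
            return resp
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
        time.sleep(delay)
    raise RuntimeError("Rate limit: retries exhausted")
```

Taking a `do_request` callable (e.g. `lambda: requests.get(url)`) rather than a URL keeps the helper testable and independent of any particular HTTP client.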
View on HN · Topics
Maybe it is language-specific? Maybe LLMs have a lot of good JavaScript/TypeScript samples in their training data, and that's why it works for those devs (e.g. me). I've heard that Scala devs have problems with LLMs writing code too. I'm puzzled by good devs not managing to get LLMs to work for them.
View on HN · Topics
I definitely think it's language-specific. My memory may deceive me here, but I believe LLMs are infinitely better at pumping out Python scripts than Java. Now, I have much, much more experience with Java than Python, so maybe it's just a case of what you don't know... However, the tools it writes in Python just work for me, and I can incrementally improve them so they get steadily better and more aligned with what I want. I then ask it to do the same thing in Java, and it spends half an hour trying to do the same job and gets caught on some bit of trivia, e.g. how to convert HTML escape characters, `s.replace("&lt;", "<").replace("&gt;", ">").replace("&quot;", "\"")`, and endlessly compiles and fails over and over, never able to figure out what it has done wrong, and never deciding to give up on the minutia and continue with the more important parts.
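The escape-conversion "trivia" this commenter describes is, for what it's worth, a standard-library one-liner in Python, which may partly explain why models fare better there: the idiomatic solution is short, common in training data, and hard to get wrong. A minimal sketch of the same task the Java attempt stumbled on:

```python
import html

# Converting HTML escape characters back to literals, e.g. "&lt;" -> "<".
# No manual chain of replace() calls needed; html.unescape handles all
# named and numeric character references.
escaped = "&lt;div class=&quot;note&quot;&gt;"
print(html.unescape(escaped))  # → <div class="note">
```

(For the opposite direction, `html.escape` exists; both ship with the standard library, so there is no dependency for the model to misremember.)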