
Automotive embedded software | Development engineering for software (personal views)

  • Well, we have all been there at one time or another: LLMs seem to have all the answers. Have you wondered how that is possible? I have, and I have tried to understand LLMs better; after all, they are our (software teams') frenemy (friend or enemy, depending on the situation).

    For starters, LLMs are mere token generators, trained by companies like Meta, Anthropic and OpenAI on all the internet data available so far, and the training continues. The LLMs have ingested all of this and created associations between tokens and chunks of tokens. Those who have worked with NLP (Natural Language Processing) will know about words, embeddings and vectors; the LLM is an extension of this, but at a massive scale.

    If LLMs were provided to us as-is, we would not be so impressed: they hallucinate at times, give wrong information, and so on. What companies have done is surround these LLMs with additional tools. For example, when you give GitHub Copilot a piece of log to analyze, it analyzes it by running different commands (with the user's permission). This combination of LLM + tools makes them agents, far more useful for solving the problem at hand.
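    The loop behind such an agent can be sketched as follows. This is a minimal illustration only: the model call and the single tool are hypothetical stand-ins, and real agents add permission prompts, retries and much more machinery.

```python
# Sketch of an agent loop: an LLM proposes tool calls, the harness
# executes them (with permission) and feeds the results back, until
# the model produces a final answer. Model and tool are stand-ins.

def fake_llm(conversation):
    """Stand-in for a real model API call: decides the next step."""
    if not any(m["role"] == "tool" for m in conversation):
        return {"tool": "run_command", "args": {"cmd": "grep ERROR build.log"}}
    return {"answer": "The build failed while fetching package 'foo'."}

def run_command(cmd):
    """Stand-in tool: a real agent would shell out here, with user consent."""
    return "ERROR: fetcher failure for package 'foo'"

TOOLS = {"run_command": run_command}

def agent(user_request):
    conversation = [{"role": "user", "content": user_request}]
    while True:
        step = fake_llm(conversation)
        if "answer" in step:                          # model is done reasoning
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # execute the tool call
        conversation.append({"role": "tool", "content": result})
```

    The point of the sketch is only the shape of the loop: the "intelligence" is the model picking the next tool, while the harness does the plumbing.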

    IMHO, LLMs or agents are not intelligent in themselves. To quote an example from work: a Yocto build was failing, and GitHub Copilot identified the reason for the failure, but only on a human nudge did it write a script to download the failing packages. Copilot could not think of this itself; it was the human in the loop who had to supply the idea. Once the idea was given, though, within the next 10 minutes it generated a shell script to download the failing package. So a human in the loop is very much essential: someone who has her/his own ideas to try. The laborious work of writing code to try those ideas is what the agents solve.

    I have also witnessed agents taking over a developer's mind and time. Used without stepping back every now and then, they send the developer down a rabbit hole, and the end result is a massive token-usage bill.

    So agents must be used like agents: the human must be able to tell whether the output looks right. If the human has no clue what the expected end result looks like, then agent and human together are just burning tokens and time.

    Most important: excessive reliance on agents leads to cognitive atrophy and a loss of self-confidence in solving the problem at hand.

    Analogously, if a carpenter does not know his trade well enough (which tool to use, and when), then giving him an agent will never make him a better carpenter.

    So we all need to keep learning, and then rely on agents to do the heavy lifting; the agent must be in our control, and not the other way around.

    Happy LLM'ing!

    Why was I so impressed with LLMs (Large Language Models)

    –––––––

    May 3
  • An Electric Vehicle (EV) produces zero tailpipe emissions, and also near-zero noise; the quiet ride seems like a cool factor. But in reality, this "Silence of the EVs" makes it impossible for other road users to know that there is a vehicle at a distance, unlike ICE (Internal Combustion Engine) vehicles, which make a pass-by noise.

    This pass-by noise is especially important for the safety of Vulnerable Road Users (VRUs): pedestrians, parents with strollers, teens with headphones.

    The technical solution is an external speaker, connected to the infotainment ECU, which plays a speed-dependent humming noise that alerts VRUs. This is regulated in the EU and has been mandatory since 2019; the link can be found at https://single-market-economy.ec.europa.eu/news/electric-and-hybrid-cars-new-rules-noise-emitting-protect-vulnerable-road-users-2019-07-03_en
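    The control logic can be sketched as below. The 20 km/h activation threshold follows the UN R138 AVAS requirement (sound when moving slowly forward or reversing, with a pitch shift that signals speed changes); the base pitch and slope here are made-up illustrative values, not regulatory ones.

```python
# Illustrative AVAS (Acoustic Vehicle Alerting System) logic:
# the speaker plays a hum whose pitch tracks vehicle speed so VRUs
# can judge approach. Numeric values below are invented for the sketch.

AVAS_MAX_SPEED_KMH = 20.0     # above this, tyre/wind noise is deemed audible
BASE_FREQ_HZ = 400.0          # illustrative base pitch at standstill
FREQ_SLOPE_HZ_PER_KMH = 15.0  # illustrative pitch rise per km/h

def avas_active(speed_kmh, reversing=False):
    """Sound is emitted when reversing or moving slowly forward."""
    return reversing or 0.0 <= speed_kmh <= AVAS_MAX_SPEED_KMH

def avas_frequency(speed_kmh):
    """Speed-dependent hum frequency, capped at the cutoff speed."""
    return BASE_FREQ_HZ + FREQ_SLOPE_HZ_PER_KMH * min(speed_kmh, AVAS_MAX_SPEED_KMH)
```

    In a real vehicle this logic would live on the infotainment ECU, fed with speed from the vehicle bus and driving the external speaker.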

    I would like to urge the major Indian OEMs like https://www.instagram.com/mahindraelectricsuvs/ and https://www.instagram.com/tata.evofficial/ to explore this possibility to improve VRU safety. In addition, I would like to request https://www.instagram.com/arai_india/ to add it to the homologation mandate, if feasible.

    Silence of the Electric Vehicles in dense urban network in India

    –––––––

    Apr 27
  • Well, here I have only one experiment to share, unlike Mahatma Gandhi's "My Experiments with Truth" (which is worth a read, if you haven't already).

    I am building a small FreeRTOS-based application. As a rite of passage, one has to write the "Hello World", which is a big confidence booster and a dopamine hit. As you can imagine, my "Hello World" task was not willing to say "Hello World". I tried a few more things, with no luck of seeing the elusive greeting.

    Then I recalled a friend's remark that he has completely changed his development style with LLMs, and I thought I would give it a stab. I purchased the 20 $ GitHub Copilot subscription and installed it as a plugin in VSCode.

    I selected the part of the code that was not working and asked for review comments; the LLM generated its tokens, and all looked good. Then I requested the LLM to check why neither the task, nor the task-creation failure path, was printing anything. It started analyzing all the files in the workspace, asked me for permission to fix the code, removed all the experimental code I had inserted while debugging, ran the build each time, and lo and behold: the "Hello World".

    Really mind-boggling. But when I look at the code again, the formatting has changed, the style has changed, and I can barely recognize it as my own. So I am wondering what the impact on the maintainability and understandability of code will be in the long run, or whether we are doomed to rely on LLMs for that too, much like we no longer remember anyone's phone number! Development will be abstracted away, like a name on the phone under which lies the real phone number.

    #Handwritten #NoAI

    My experiment with GitHub Copilot for development

    –––––––

    Apr 20
  • What is Yocto

    It is a build and packaging system

    It provides configuration management of different open-source and closed-source packages

    It helps to build complete product software

    Lastly, it is a Swiss Army knife for software development

    It has a steep learning curve

    What Yocto is not

    It is not a Linux distribution

    If you are developing an embedded Linux product and would like the entire product software to be built from scratch, then Yocto is the answer. It comes with the concept of recipes, which are an abstraction on top of older build systems like make or CMake. Using Yocto, different product variants can be configured and built. Yocto also generates a BOM (Bill of Materials), including SPDX output, if enabled at each package level. Yocto fetches the different packages from many open-source sources, builds them from source, and installs them into the root filesystem. Yocto can also build the Linux kernel.
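    For a flavour of what a recipe looks like, here is a minimal, illustrative hello.bb. The file name hello.c and install steps are hypothetical; the MIT checksum is the standard one shipped with openembedded-core. Adapt paths and variables to your own layer.

```bitbake
SUMMARY = "Minimal illustrative recipe that builds one C file"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://hello.c"
S = "${WORKDIR}"

do_compile() {
    ${CC} ${CFLAGS} ${LDFLAGS} hello.c -o hello
}

do_install() {
    install -d ${D}${bindir}
    install -m 0755 hello ${D}${bindir}/hello
}
```

    Everything else (fetching, dependency ordering, packaging, installing into the root-fs) is handled by BitBake around these few lines; that is where the power, and the learning curve, comes from.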

    So in summary, it is a powerful tool for product development with a steep learning curve; once mastered, you will love its power.

    May the force be with you !

    #NoAI has been used to write this note.

    Gentle introduction to Yocto

    –––––––

    Apr 14
  • Well, if you are developing a microcontroller-based product, there are a couple of ways to proceed. One is to develop the entire application in one single infinite loop (a "superloop") and manage the timing yourself by fine-tuning.
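    The superloop style can be sketched as below. This is a host-side Python simulation with invented task names and periods; on a real MCU, the loop would be `while (1)` in C and the tick would come from a hardware timer interrupt.

```python
# Superloop sketch: one loop, timing managed by hand with modulo checks.
# The tick is simulated and the loop bounded so the sketch runs anywhere.

counters = {"sensor": 0, "control": 0, "display": 0}  # stand-in task bodies

def read_sensor():
    counters["sensor"] += 1       # e.g. sample an ADC

def run_control():
    counters["control"] += 1      # e.g. one control-loop step

def update_display():
    counters["display"] += 1      # e.g. refresh the HMI

def superloop(run_for_ms):
    for tick_ms in range(run_for_ms):   # stand-in for `while True`
        if tick_ms % 10 == 0:
            read_sensor()               # every 10 ms
        if tick_ms % 50 == 0:
            run_control()               # every 50 ms
        if tick_ms % 100 == 0:
            update_display()            # every 100 ms
```

    The weakness is visible even in the sketch: if any task overruns its slot, every other task slips, which is exactly the timing burden an RTOS takes off your hands.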

    Another approach is to use an RTOS (Real Time Operating System) and let it handle the timing constraints for the application, so as a developer you can focus on the application logic.

    One of the popular choices is FreeRTOS, which is available for more or less every combination of compiler and MCU variant, even including RISC-V. Now, with this approach, there are a few choices to be made.

    Use FreeRTOS without paying any royalty and go into production; it comes with the permissive MIT license.

    If your company has a blanket policy against using any kind of open source, then you can go for OpenRTOS, which is fully compatible with FreeRTOS; it is the exact same kernel under a commercial license.

    If the product in question is a safety product, then the choice is SafeRTOS, so that the safety certification of the kernel is taken care of by https://www.highintegritysystems.com/.

    Happy development !

    FreeRTOS vs. SafeRTOS vs. OpenRTOS

    –––––––

    Apr 5
  • We have entered an era where everyone is chatting with LLMs to share their emotions, troubles and joys, leading to unprecedented breaches of personal data privacy. This is similar to the early days of social media, when everyone thought: what is the big deal about sharing some pics, travels and events? Now those of us who shared our data with social media are to a large extent addicted to it and can't get away from it. We are at the cusp of a similar situation with LLMs, where we may share our innermost feelings and fears. A day may come when we can't live without these LLMs and our human-to-human interactions keep declining; this reliance on LLMs leads to more isolation and even more usage of LLMs, in an endless loop.

    To improve human-to-human communication, it may be a good idea to go back to basics and understand human psychology. An excellent course on this topic, by Prof. Paul Bloom of Yale, is available on Coursera for the price of a couple of Starbucks coffees: https://www.coursera.org/learn/introduction-psychology

    In an easy-to-understand fashion, the course explains the evolution of the science of the human mind, its quirks, and a few of its fascinating aspects. It ends with a module on the latest developments in the field.

    Happy learning! Do let me know what you think in the comments.

    LLM and Human interactions

    –––––––

    Mar 26
  • Now there is such hype about agentic AI, LLMs, SLMs, AGI, etc. that it is easy to start believing there is no need for software developers at all, and that we will somehow get 100x or even 1000x productivity from developers and test engineers.

    I have been working in automotive embedded software and can say for sure that this skill is not going to be AI'fied (meaning AI taking over). The reasons are as follows:

    1. Automotive embedded products need to be safety certified to varying degrees, depending on their criticality or SIL level.

    2. Automotive products that go on the road can harm others when they malfunction, so there are regulations to satisfy before a new vehicle can be launched into any market.

    3. LLMs are mostly trained on web and application code; embedded code is largely not on the internet, as it sits in the repos of Tier-1s and OEMs and rarely reaches the public domain.

    4. OEMs can't risk their brand reputation on LLM-generated code, so they will likely put checks and balances in place. I know a few OEMs who are giving licenses to LLM agents, but then they have so many component tests in the path that it is a lot easier to write the code by hand and get it mainlined than to use LLM code and keep getting into debug loops of failed mainline test cases.

    The best use of LLMs is as a search engine to validate what you already know. If the user can't tell whether the LLM output is correct or not, then it is better to learn the hard way and only use the LLM as an aid, rather than as the main source of knowledge.

    What do you think? Comment below.

    LLM in Embedded S/w development

    –––––––

    Mar 16
  • We have all read a lot about the Air Quality Index and its effect on respiratory health in particular. But have we ever wondered how much RF radiation we are exposed to, and what impact it has on our cells? Strictly speaking, RF at these frequencies is non-ionising; it does, however, deposit energy in tissue as heat, which is why exposure limits (such as SAR) exist.

    Add to this the GHz frequencies we are subjected to with 5G and the upcoming 6G. Each country has set limits on RF exposure within its boundaries, and every device needs to comply with EMC regulations before it can be given CE or FCC certification. That said, there is not much we can do to protect ourselves, except for one thing.

    That one thing is wearables: they transmit RF right against our skin, and wearing them 24×7 means continuous, close-range exposure. So my suggestion, if I may, is to use wearables only for the purpose they were bought for and minimise the rest. For example, there is no need to carry a phone (another radio transmitter) and a connected watch together.

    Happy responsible usage of RF devices in and around you and your family.

    AQI for RF

    –––––––

    Mar 2
  • Those of you in Autonomous Driving have surely heard this announcement. Mercedes has also tied up with NVIDIA to bring this model to the road, offering Level 4 autonomy. The launch could possibly happen in California, alongside incumbent AV platforms like Tesla, Waymo and Uber.

    What are the key differences here?

    1. For now, this model is based on an LLM: yes, you heard it right, a Large Language Model. Just as Tesla set out to solve self-driving using neural networks, NVIDIA has embraced the LLM. The chosen LLM is Llama from Meta (Facebook), which has open weights and can be further trained, unlike Claude or ChatGPT. The premise is that the LLM knows the world and, using this knowledge, can guide a vehicle autonomously.

    2. NVIDIA is relying on map providers to do the route calculation, tokenize the directions, and feed them as input to the LLM.

    3. It seems that Alpamayo will rely on cameras only, which is also a big deal and a directional change from Waymo, which uses LiDAR to better understand the world.

    4. This approach needs to handle hallucinations from the LLM, a known issue, so an additional layer of reinforcement learning is needed to address it.

    Last but not least, NVIDIA is giving this to the community for free at https://huggingface.co/nvidia/Alpamayo-R1-10B ; of course, using it ties you into the NVIDIA ecosystem (vendor lock-in).

    It remains to be seen how this approach works in the real world.

    Alpamayo from NVIDIA

    –––––––

    Feb 15
  • As today's Human-Machine Interfaces get more complex and more intuitive, it is super important to have a mechanism for automated regression testing, to minimise human error and reduce fatigue.

    Squish from Qt is one such tool that can do the job. It supports testing on embedded devices as well as desktop applications.

    Squish allows you to:

    1. Record and play back a test script

    2. Write a Python script to do the automation using the Squish framework

    3. Work in its Eclipse-based IDE, which most software folks are familiar with

    4. Develop automated HMI validation scripts with a development-style workflow
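    For a flavour, a Squish Python test script looks roughly like this. It runs only inside the Squish runner, which injects functions such as startApplication and waitForObject, so it is not a standalone program; the application and object names below are hypothetical examples of the kind Squish records for you.

```python
# Sketch of a Squish Python test script (executed by the Squish runner,
# which provides startApplication, waitForObject, type, clickButton, test).
# Application name and object names are hypothetical examples.

def main():
    startApplication("addressbook")                       # launch the app under test
    clickButton(waitForObject(":Address Book.New_QToolButton"))
    type(waitForObject(":Forename:_QLineEdit"), "Jane")   # fill a form field
    test.compare(waitForObject(":Forename:_QLineEdit").text, "Jane")
```

    In practice you record the first version of such a script and then edit it by hand, which is exactly the development-style workflow mentioned above.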

    Happy squishing !

    Link at https://www.qt.io/quality-assurance/squish

    Squish It

    –––––––

    Jan 17