Autonomous Driving Levels

Everyone has heard of autonomous cars, but did you know that most modern cars already sit somewhere on the autonomous driving scale? Here is a simple overview of the levels of autonomy.

Officially, autonomy is classed into five levels:

  • Level 1 “Hands on the wheel”: The car has driver assistance functions, but under the supervision of a driver. Basically, the car may apply the brakes for you, etc.
  • Level 2 “Hands on and off the wheel”: The car is driving itself (automated driving) but the driver is providing full supervision. This is what Tesla provides at the moment, although it is creeping towards Level 3.
  • Level 3 “Hands off and eyes off, but…”: This is where it gets interesting: the car is driving itself (automated driving) in defined situations, without driver supervision, but the driver is required to take the wheel if requested by the system. The system is in charge.
  • Level 4 “Hands off the wheel, eyes off and mind off”: The car is driving itself (automated driving) in defined situations, without driver supervision and without requiring the driver to take the wheel.
  • Level 5 “Driverless”: The car is driving itself (automated driving) without a driver at all. This is the holy grail of autonomous driving.

Apple Magic Mouse with Ubuntu – Speed up the Scrolling

Recently I realised that I had a spare Apple Magic Mouse hanging around, and it turns out that Ubuntu 16.04 comes with the Magic Mouse device driver. To connect the mouse, use the normal Bluetooth settings.

The scrolling, however, was a bit sluggish, so use the following commands to speed up the scrolling on the Magic Mouse:

To view the current settings:

$ systool -avm hid_magicmouse

To change the parameters:

$ sudo rmmod hid_magicmouse

$ sudo modprobe hid_magicmouse emulate_3button=0 scroll_acceleration=1 scroll_speed=55

To view the change:

$ systool -avm hid_magicmouse

You should see the difference when you scroll with your mouse!
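Note that module parameters set with modprobe are lost on reboot. To make them persistent, one common approach (assuming the standard modprobe.d mechanism; the filename below is arbitrary) is to put the same options in a config file:

```
# /etc/modprobe.d/magicmouse.conf
options hid_magicmouse emulate_3button=0 scroll_acceleration=1 scroll_speed=55
```

The driver will then pick up these parameters each time it is loaded.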


Linux Tutorials

Setting Up a Timer with systemd in Linux:

Timers can be used to start tasks on Linux very much like a cron job, but they offer a bit more flexibility.
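As a minimal sketch, a systemd timer is a pair of unit files: the timer itself and a matching service that does the work (the unit names and the script path /usr/local/bin/backup.sh here are hypothetical examples):

```
# /etc/systemd/system/backup.timer
[Unit]
Description=Run the backup job periodically

[Timer]
# First run 5 minutes after boot, then every 10 minutes thereafter
OnBootSec=5min
OnUnitActiveSec=10min

[Install]
WantedBy=timers.target

# /etc/systemd/system/backup.service
[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```

Enable it with `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now backup.timer`, and inspect the schedule with `systemctl list-timers`.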


Example of Team Work

As a cycling fan, I think grand tour cycling is probably one of the best examples of teamwork there is. The team gives everything for its main rider, who could be a sprinter or a GC rider (the person riding to win overall). The actual team leader on the road is usually not the main rider; they are doing a job, which usually goes unnoticed, to get their main rider into the winning positions throughout the race (which lasts for three weeks). Everybody in the team has a job to do and gives it their all. If they don’t give it their all, the outcome is simple: the team will stand little chance of winning.

(Embedded video: “How to win a sprint!”, posted by Eurosport on Wednesday, 13 July 2016. See it in action at the Tour de France, live on Eurosport and Eurosport Player.)


Sensor Fusion

This is an old post I wrote a couple of years ago; while it is old, I still believe it is very relevant today.

What is Sensor Fusion?

Sensor fusion is the process by which data from different sources are fused together to detect something greater than what a single sensor could provide. One form of sensor fusion (which is really sensor enhancement) is found in radio astronomy, where a very large array, known as an interferometric array, is formed from many smaller telescopes to give the effect of one very large radio telescope. Of course, this only increases the sensitivity of one kind of sensor. Sensor fusion more commonly brings together many different kinds of sensors to form a complete picture of the environment in which they are situated, then uses this information to make informed decisions and implement actions as a result.
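As a minimal sketch of the idea, here is the simplest form of sensor fusion: combining two noisy measurements of the same quantity (say, the distance to an obstacle from radar and from a camera) by inverse-variance weighting, so the more reliable sensor counts for more. The numbers are purely illustrative.

```python
def fuse(m1, var1, m2, var2):
    """Fuse two noisy measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance,
    so the more precise sensor dominates the fused estimate.
    Returns the fused value and the (smaller) fused variance.
    """
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * m1 + w2 * m2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Radar says 10.2 m (variance 0.04); camera says 9.8 m (variance 0.16).
estimate, variance = fuse(10.2, 0.04, 9.8, 0.16)
print(estimate, variance)  # fused estimate sits closer to the radar reading
```

Note that the fused variance is lower than either sensor's own variance, which is exactly the point: the combination knows more than any single sensor.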

Where is Sensor Fusion used?

The most common case at the moment is in the automotive industry, where sensor fusion is key to intelligent ADAS systems. For example, the camera, using computer vision, will detect objects, people and animals moving in front of the car. Combine this with radar, plus sensor inputs that detect rotation and acceleration, and you have the basis of a car that can make informed decisions in certain situations, for example when presented with obstacles in its path of travel. Other uses are for convenience rather than being safety critical, such as the car automatically opening its garage door when it arrives in front of it.

What expertise is required?

This all sounds simple, but it brings a level of complexity that is often unforeseen. The technology involved in such applications can be vast: the system contains sensors with embedded MCU software, plus server or smart-hub based software (the “brain” of the system) collecting the information, stitching it together and making decisions, all in a split second. Once an action is decided upon, actuators and embedded software may be required to implement it. Alternatively, the action might simply be to email your garage to say your car needs a service. The expertise required can include:


  • Microcontrollers
  • Computer Vision
  • Radar/Lidar
  • Middleware Android/Linux expertise
  • Hardware experience
  • UI/UX expertise to relay feedback to the driver safely
  • Software Security

Further questions might be:

So where would you find this expertise?

Who produces the hardware for this?

Who can integrate these to work as a combined solution?

Who can support this in the long term?

Would you like a discussion on Sensor Fusion…

Please contact me for a discussion on Sensor Fusion, I am very interested in discussing your views and where these kind of technologies are heading.

Email me @Codethink: John Ward

Email me @personal email: John Ward

Twitter: @JohnWardTech


Introduction to Computer Vision

What is Computer Vision?

The field of Computer Vision allows a computer to emulate human vision by extracting perception and understanding from an image or video. This perception can then be used to allow the computer to make decisions, for example detecting an object on a collision course towards it, or identifying parts of an input image such as a face or a type of object.

A computer goes through a process of analysing image or video information to produce numerical or symbolic data. This data is then used by accompanying computer programs to make decisions.
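As a minimal sketch of that image-to-data-to-decision pipeline, here is a toy example (the image, threshold and values are all invented for illustration): a tiny grayscale image is scanned for strong horizontal intensity changes, a crude edge detector, and the resulting number drives a decision.

```python
# A tiny 3x4 grayscale "image": a dark region next to a bright region.
image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]

def edge_strength(img):
    """Return the largest absolute horizontal intensity change.

    This is the "numerical data" step: the image is reduced to a
    single number describing how sharp its strongest edge is.
    """
    return max(
        abs(row[x + 1] - row[x])
        for row in img
        for x in range(len(row) - 1)
    )

# The "decision" step: a program acts on the extracted number.
strength = edge_strength(image)
if strength > 50:  # threshold chosen purely for illustration
    print("edge detected, strength", strength)
```

Real computer vision systems replace the hand-written gradient with far more sophisticated feature extraction, but the overall shape, pixels in, numbers out, decision taken, is the same.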

Computer vision is being adopted in a number of industry areas; here are some examples:
  • The automotive industry, as part of a driver assistance strategy (ADAS)
  • Medicine, for better diagnosis, treatment and prediction of diseases
  • Entertainment systems, such as gaming

What could Computer Vision be used for?

Uses for computer vision systems include:

  • Object detection and recognition, including segmenting the object in the area of interest
  • Image processing and filtering
  • Navigation
  • Organising information databases of images
  • Part of a greater system to detect objects and actions (e.g. a person about to walk out into the path of a moving vehicle)

Computer Vision Examples

See my next post for computer vision examples (examples coming soon!).
Mobica has excellent experience in Computer Vision; please feel free to contact me for a discussion if this is something you need assistance with.

Hello world again!

Welcome to my blog, re-launched on Amazon Lightsail. My articles will be biased towards open source subjects, but I will still cover graphics subjects. I recently joined a new company called Codethink. Codethink are world leaders in Linux and Open Source.

Open Source is an interesting concept, and open source engineers tend to be a lot stronger than engineers who work on proprietary solutions, because their work is critiqued out in the open by many engineers; the net result is that the quality of engineering stays high.

I will also re-post some of my old articles, if they are relevant/interesting. Hope you enjoy it.