#0108: Machine intelligence and autonomy

Braingasm

Matthew Sinclair
4 min read · Sep 24, 2019

[ED: It’s been a busy month or so, first doing a well-deserved whole lotta nothing during August, and then the inevitable uptick of activity after the break. Normal service should be resumed shortly.]

I’ve been doing some thinking recently about the impact of machine intelligence. A seemingly obvious conclusion to draw from the rollout of intelligent machines is that the more intelligent they get, the more autonomy we want to give them. Or at least, the more autonomy that some of us want to give them. There is a view that this is a priori a Good Thing™. But is it?

If we are going to hand over autonomy to intelligent machines, then it would seem to make sense to have some understanding of what that actually means. Perhaps a simple framework for thinking about the types of autonomy would help. Ok, so what about this:

When thinking about the impacts of machine intelligence, it can be useful to consider three different types of autonomy:

  • Analytical Autonomy
  • Predictive Autonomy
  • Behavioural Autonomy

Analytical Autonomy is the idea that we are happy to let machines analyse data on their own. This should be relatively easy to understand because it is what machines (even unintelligent ones) do all the time: they take some inputs and perform some calculations on those inputs to produce an output. After that, it is up to humans to make a judgement on the output and do something with it. A simple example of analytical autonomy might be a lane departure warning in a vehicle that alerts the driver if the vehicle momentarily drifts out of its lane. To do this, the system would need some simple computer vision technology to recognise lane markings and to understand where the vehicle is in space relative to those markings.
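
As a rough illustration of how thin this layer is, here is a minimal sketch in Python. Everything in it is hypothetical: `LanePosition` stands in for whatever the (unspecified) vision pipeline actually produces, and the threshold is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class LanePosition:
    left_offset_m: float   # distance from vehicle centre to the left marking
    right_offset_m: float  # distance from vehicle centre to the right marking

def lane_departure_warning(lane: LanePosition, margin_m: float = 0.2) -> bool:
    """Analytical autonomy: compute a fact about the world and report it.
    The system only alerts; the driver decides what to do about it."""
    return lane.left_offset_m < margin_m or lane.right_offset_m < margin_m

# A (hypothetical) vision pipeline would produce LanePosition frames;
# this layer just turns them into an alert for the human.
if lane_departure_warning(LanePosition(left_offset_m=0.15, right_offset_m=1.6)):
    print("Lane departure warning: alert the driver")
```

The point is that the machine’s output stops at the alert; everything after that is human judgement.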

Predictive Autonomy is the idea that we are happy to let machines make a prediction about something, based on some input data. It is one thing for a machine to do a calculation about something that happened in the past; it is another thing altogether for the machine to do some analysis, make a prediction, and then for a human to rely on that prediction, potentially without fully understanding how the prediction was derived. An example of predictive autonomy might be a driving system that observes that a vehicle ahead is about to move into a position that could compromise the vehicle it is controlling. To do this, the system would need to use computer vision to analyse the space around the vehicle, identify and track other vehicles in that space relative to the vehicle being controlled, and then infer when the trajectory of any of those vehicles was likely to pose a danger.
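
To make that concrete, here is a deliberately simplified sketch of the predictive step, assuming the vision layer already yields each tracked vehicle’s relative distance and closing speed (a real system would use far richer motion models than constant velocity):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedVehicle:
    rel_distance_m: float    # gap to the other vehicle, in metres
    rel_velocity_mps: float  # closing speed; positive means the gap is shrinking

def predicted_time_to_collision(v: TrackedVehicle) -> Optional[float]:
    """Predictive autonomy: extrapolate a simple constant-velocity model
    into the future. The output is a prediction, not an action."""
    if v.rel_velocity_mps <= 0:
        return None  # gap is stable or growing; no collision predicted
    return v.rel_distance_m / v.rel_velocity_mps

ttc = predicted_time_to_collision(TrackedVehicle(rel_distance_m=24.0, rel_velocity_mps=6.0))
if ttc is not None and ttc < 5.0:
    print(f"Predicted collision in {ttc:.1f}s -- surface this to the driver")
```

Even here the machine’s job ends at the number; it is still the human who decides what to do with it.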

Behavioural Autonomy is the idea that we are happy not only to let a machine analyse some data and make a prediction, but to then let the machine act on that prediction in a way that manifests in the physical world. Continuing the vehicle example, behavioural autonomy would be when the vehicle’s auto-drive system engages, or overrides the driver, to slow the car down or even change lanes automatically in order to avoid a predicted collision.
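
Continuing the same hypothetical pipeline, the behavioural layer is where the output stops being information and becomes an action. A minimal sketch, again with invented names and thresholds:

```python
from typing import Optional

def avoid_collision(ttc_s: Optional[float], threshold_s: float = 2.0) -> str:
    """Behavioural autonomy: the machine acts on its own prediction.
    Below the threshold it no longer asks; it intervenes."""
    if ttc_s is None or ttc_s >= threshold_s:
        return "no_action"
    # In a real vehicle this would be a command to the actuators,
    # possibly overriding driver input -- which is exactly what
    # raises the stakes of this category.
    return "apply_braking"

print(avoid_collision(1.4))  # -> apply_braking
```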

If we are to hand over control of systems in the real world to intelligent machines, then it is probably worth understanding the degree of autonomy that we are going to give them.

Analytical autonomy requires little additional oversight because any manifestation of change arising from that analysis is still in the hands of humans. Predictive autonomy poses a bigger challenge: although a human still makes the final decision, there could well be situations where the reasons for a prediction are opaque. For example, most deep learning systems have the property that although they may be very good at making accurate predictions, they are not (generally) able to provide human-understandable reasons for those predictions. This can be problematic in fields like wealth management, where laws require that advice given by financial analysts be subject to audit.

Behavioural autonomy is the category that presents the most challenges. The obvious example with autonomous vehicles is the question of fault when something goes wrong. Is it the person in control of the vehicle who is at fault? The manufacturer of the vehicle? The author of the software? Who knows?! Completely behaviourally autonomous technology of this kind, should it ever eventuate, will almost certainly be running ahead of any legal, ethical, and legislative debates.

As a software engineer working with these kinds of technologies, it may be worth being explicit in advance about the type of autonomy that the system will manifest, and, as you move up to predictive and behavioural autonomy, spending a bit more time thinking about the consequences of failure than you otherwise might if it were just humans in the loop.
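
One lightweight way to make that explicitness real (a sketch, not any established standard) is to name the autonomy level directly in the system’s code or configuration, so that escalating it becomes a visible, reviewable change rather than an accidental side effect:

```python
from enum import Enum

class AutonomyLevel(Enum):
    ANALYTICAL = "analytical"    # machine reports; humans act
    PREDICTIVE = "predictive"    # machine predicts; humans still act
    BEHAVIOURAL = "behavioural"  # machine acts; humans supervise, at best

# Declared up front, reviewed whenever it changes.
SYSTEM_AUTONOMY = AutonomyLevel.PREDICTIVE
```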

Regards,
M@

[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]
