Another potential topic comes to light.

While researching existing work in the field of automated vehicles and trust, I came across a topic that itself has a dramatic influence on the social acceptance of, and trust in, automated vehicles: the Moral Machine.

This social experiment has been running for a couple of years, gauging people's views on how an automated vehicle should react in a severe, split-choice situation. As an example, imagine an automated vehicle travelling at speed towards a group of pedestrians crossing the road. On the wrong side of the road for the vehicle there is one person; on the correct side there are two. Stopping safely is no longer an option, so the vehicle must make one of three decisions: a) drive into the two people, b) drive into the one person, or c) swerve into whatever environmental object it can find, potentially killing the occupants of the vehicle.
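To make the structure of that choice concrete, here is a minimal Python sketch of the dilemma as posed above. The names and the crude "worst-case casualty" model are purely my own illustration, not anything taken from the Moral Machine itself:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    HIT_GROUP_AHEAD = "drive into the two pedestrians"
    HIT_ONCOMING = "drive into the one pedestrian"
    SWERVE_OFF_ROAD = "swerve off the road, risking the occupants"

@dataclass
class Dilemma:
    """One split-choice scenario as posed to a participant."""
    pedestrians_ahead: int      # on the vehicle's side of the road
    pedestrians_oncoming: int   # on the opposite side
    occupants: int              # people inside the vehicle

def casualties(dilemma: Dilemma, action: Action) -> int:
    """Worst-case deaths for each choice, under the simplifying
    assumption that whoever is struck does not survive."""
    if action is Action.HIT_GROUP_AHEAD:
        return dilemma.pedestrians_ahead
    if action is Action.HIT_ONCOMING:
        return dilemma.pedestrians_oncoming
    return dilemma.occupants

scenario = Dilemma(pedestrians_ahead=2, pedestrians_oncoming=1, occupants=1)
for action in Action:
    print(f"{action.value}: {casualties(scenario, action)} potential casualties")
```

Of course, the whole point of the Moral Machine is that people do not simply minimise a casualty count; the interesting part is where their answers diverge from it.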

This has been a huge talking point since well before automated vehicles became a realistic prospect, mainly around automated systems used across the industrial sector.

This whole area of understanding and social experimentation opens an important door for analysis, though with a major caveat which I shall explain shortly. The current experiment asks people what the vehicle should do based on an abstract image depicting the situation and its outcomes. This raises the concern that participants may feel no risk in choosing either option because they are entirely abstracted from the situation: it is purely theoretical. On the flip side, I could build on the existing research from this study and attempt to validate or counteract its results, depending on the outcome of the research I conduct. That research could involve studying human decisions in a virtual, real-time environment. This presents an interesting alternative that could improve how we understand social norms under critical conditions.
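As a rough sketch of what such a real-time trial might record, here is one way the decision loop could look. Everything here is hypothetical: the four-second decision window, the function names, and the idea of polling a VR controller are my own assumptions, not part of any existing study:

```python
import time

DECISION_WINDOW_S = 4.0  # assumed time before the virtual vehicle reaches the pedestrians

def run_trial(scenario_id: str, get_input) -> dict:
    """Present one scenario and record what the participant chose and
    how long they took. `get_input` polls the VR controller and returns
    an action name, or None if no choice has been made yet."""
    start = time.monotonic()
    choice = None
    while time.monotonic() - start < DECISION_WINDOW_S:
        choice = get_input()
        if choice is not None:
            break
    return {
        "scenario": scenario_id,
        "choice": choice,  # None means the participant made no decision in time
        "reaction_time_s": round(time.monotonic() - start, 3),
        "timed_out": choice is None,
    }
```

The value of a design like this, compared with the static Moral Machine images, is that freezing and hesitation become data in their own right rather than being hidden behind a considered, unhurried answer.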

There are a number of arguments against this potential area of research. Firstly, there is the ethical question: should we really be putting people inside a virtual reality space and asking them to decide who should be safe and who should not? Is the abstractness of the Moral Machine rightly justified as an ethical line? The counter-argument is that, because the experiment takes place in virtual reality, it retains a level of safety and abstractness that does not actually harm anyone involved (theoretically).

That last point leads to a further objection: if the abstraction still leaves participants feeling safe, what is the real point of the research? My main response is that moving the situation into a real-time environment changes what is being measured; it will reveal what people actually judge to be socially justifiable under rapid, pressured conditions.

This is not yet a confirmed research topic, but it is another option on the cards that will need evaluating and justifying down the line.
