AI is becoming more sophisticated and omnipresent, encroaching on our everyday lives and disrupting the workplace.
To prepare our kids for the future, education experts say we need to teach them for life in the digital age: how to sift through and critically assess the surplus of information, problem solve, and be resilient when faced with fast-moving change.
But we're also living and working longer, and each decade of those longer careers will bring new change. Artificial intelligence is already poised to transform some industries and professions, with algorithms capable of diagnosing skin cancer as well as dermatologists and mastering music like an audio engineer.
By encroaching on our lives, artificial intelligence has provoked deep questions of what makes us human. More directly, as technology infiltrates the workplace, it is also influencing what skills and capabilities are deemed valuable by employers.
So how can we thrive in this changing world?
Once the stuff of science fiction, artificial intelligence (AI) is now part of our everyday lives. Most of us use multiple devices as we go about our work and daily life. Many of those interactions are already based on autonomous algorithms, which learn from patterns in data to perform tasks that simulate human decision-making.
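To make that idea concrete, here is a minimal sketch - invented data, using the open-source scikit-learn library - of how such an algorithm picks up a decision rule from patterns in past examples rather than being told the rule explicitly:

```python
# A toy "will it rain?" predictor: the algorithm is never given a rule;
# it infers one from patterns in historical observations.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical past observations: [humidity %, cloud cover %]
observations = [[90, 85], [85, 90], [30, 20], [40, 10], [80, 75], [20, 30]]
rained = [1, 1, 0, 0, 1, 0]  # what actually happened (1 = it rained)

model = DecisionTreeClassifier()
model.fit(observations, rained)  # learn a rule from the examples

# The learned rule now simulates a human judgement on a new day.
print(model.predict([[88, 80]]))  # -> [1], i.e. "likely rain"
```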
"AI exists in many ," says Anco Peeters, a philosophy student in the ÁñÁ«ÊÓƵapp of ÁñÁ«ÊÓƵapp's (UOW) School of Humanities and Social Inquiry, specialising in human cognition and artificial intelligence.
"There are , like Amazon's Alexa and Apple's Siri. We predict the weather and traffic with AI algorithms, and the is based on it too. The use of AI will continue to grow and increase but it's already very much integrated in how we live."
"By nature and by design, embedded in technology is a particular person's view of the world."
- Dr Sharna Wiblen
In his PhD, Mr Peeters is exploring how our understanding of the mind is influenced by our relationship with technology.
"Smart technologies have not only become ubiquitous, they have also to a large extent become invisible," he says. "Our relationship with technology has become very intimate in a sense that it's all around us. It's gathering data about us, about even our most private and mundane things."
Our incessant use of technology suggests that we're okay with that - quick to agree to terms and conditions without reading the fine print when it means we can have news, apps and social media at our fingertips - until artificial intelligence crosses a line that makes us feel uncomfortable, threatened or even compromised.
When Google demonstrated the capability of its digital assistant to make an appointment over the phone, responding seamlessly to queries with casual conversation, technology enthusiasts applauded. The person who answered the call seemed totally unaware that they were not in fact speaking with another human, and that made others uneasy.
Outsourcing mundane tasks to computer technologies is one thing, but a computer program impersonating a human without detection (which has long been a measure of artificial intelligence, albeit a contested one) raises difficult ethical questions.
The division of labour between humans, machines and algorithms in the workplace is another pressure point.
Historically, technological progress has been confined to the mechanisation of manual tasks in pursuit of greater efficiency or technical precision. Conveyor belts were introduced into supermarkets to speed up checkouts, and robots entered assembly lines and operating theatres to assist manufacturers and surgeons.
Artificial intelligence, however, goes one step further, edging towards more complex cognitive tasks by mining big data sets deeper and faster than humans could, or scanning the environment without distraction. In this way, AI is transforming legal and financial services, healthcare, and transport.
Dr Sharna Wiblen, a lecturer at UOW's Sydney Business School, argues this wave of technological innovation is not unlike those of years gone by. Whether it's AI or automation, technology displaces tasks, not people or entire jobs, she says.
"AI is displacing the task but the people themselves are still absolutely employable," says Dr Wiblen, who specialises in talent management, analysing in her research how organisations identify, recruit and develop talented employees.
She says people need to be adaptable and embrace change in their careers. "We only need to go back 20 years when people thought they were going to do one job for the rest of their lives. That was the mindset we had as a society."
"Every generation goes through huge changes. Every industry will change. You have to be agile." - Dr Sharna Wiblen. Photo: Paul Jones
Now new graduates will change careers multiple times, over working lives that may span half a century by the time they retire.
"What happens then to people who say, 'But I'm an accountant and this is what I am'? Our focus should be on their mindset," Dr Wiblen says. "Every generation goes through huge changes. Every industry will change. You have to be agile."
That said, the AI incursion is happening faster than many people might feel comfortable with, and no workplace is immune to the influence of technology. With machines expected to perform an ever greater share of hours on task in the workplace by 2025, we will have to learn to work alongside them.
Dr Wiblen emphasises the opportunities that exist: for personal development as technology replaces routine tasks and affords employees the time to focus on more strategic initiatives, and for conversations about what skills will be valued by organisations - and how people make those decisions.
In the workplace of the future, people will still be an organisation's most valuable resource. Creative and critical thinking will be imperative, as will the interpersonal skills and emotional intelligence used to negotiate, persuade or work with others.
"Although we have all these fantastic technological advancements, we're still talking about humans dealing with humans and working with humans," Dr Wiblen says. "We can't take that human element out of society and how we work with each other."
We can't take the human element out of recruitment either.
"Someone is making a judgement call about the value of another human being," Dr Wiblen explains. "We don't transact with companies and organisations. We engage with other human beings who have designed that organisation or told us what it stands for." And with that comes their perception of what makes the right candidate for the job.
"Technology has its allure but if we think people are making poor decisions, why don’t we invest in the humans?"
- Dr Sharna Wiblen
In her research, Dr Wiblen has found that managers have subjective and widely different understandings of what constitutes talent and how to go about identifying it.
When looking for a talented employee, she says, people might appreciate prowess, flair, knack or natural aptitude - admirable but ill-defined qualities - but they also project their own perceptions of what success looks like onto the candidate, based on their personal attitudes or previous experience.
"We often talk about recruiting someone who is a good fit but you might have a whole lot of perceptions about me because you think I look like you," Dr Wiblen says, highlighting that the notion of a 'good fit' isn't necessarily based an individual's ability to contribute to the organisation's strategic goals.
Organisations have tried to use AI technology to improve recruitment and talent management, but algorithms aren't impartial either. Amazon, for example, scrapped its recruitment tool, which compared the resumes of applicants to those of previous recruits, when it began to down-rank women. Inevitably, the dataset reflected our social biases.
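The mechanism behind that failure is easy to see in miniature. The sketch below is purely illustrative - invented data and features, not Amazon's actual system - but it shows how a model trained on historical hiring decisions that skewed male can learn gender as a proxy for talent:

```python
# Hypothetical illustration of bias leaking from training data into a model.
# This is NOT Amazon's tool; the features and data are invented.
from sklearn.linear_model import LogisticRegression

# Past recruits: [years_experience, is_male]. Historical hires skewed male,
# so gender correlates with the "hired" label even though it is irrelevant.
candidates = [[5, 1], [6, 1], [4, 1], [7, 1], [5, 0], [6, 0], [4, 0], [7, 0]]
hired = [1, 1, 1, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(candidates, hired)

# Two otherwise identical resumes, differing only in gender:
print(model.predict_proba([[6, 1]])[0][1])  # male candidate's "hire" score
print(model.predict_proba([[6, 0]])[0][1])  # female candidate's, ranked lower
```

Two otherwise identical candidates end up with different scores, purely because the historical data did.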
"By nature and by design, embedded in technology is a particular person's view of the world," Dr Wiblen says. "Technology has its allure but if we think people are making poor decisions, why don't we invest in the humans?"
"Artificial intelligence allows us to investigate the true capacity of human cognition."- Anco Peeters. Photo: Paul Jones
Artificial intelligence can make us feel uncomfortable because it forces us to ask difficult questions of ourselves and reveals truths about our society. But asking those questions also shows how human intelligence differs so greatly from artificial neural networks that mimic our minds.
"You can only copy or create the mind through a computer if you think that there is some deep resemblance going on there," says Mr Peeters of the dominant theory in philosophy and science that the human mind processes information like a computer.
What we can do that AI cannot is reflect on past decisions and critique our own behaviour through introspection.
"AI cannot reflect. It's not self-aware," explains Dr Wiblen. "It makes decision after decision, tweaking its response to move forward, but it cannot ask, 'Was that effective?' It's called artificial intelligence for a reason."
We can apply this human reflexivity in the workplace to suggest improvements to work processes, consolidate lessons learnt, develop better working relationships and challenge our own perceptions in recruitment.
"The design of AI algorithms should involve the end users as well as people with a background in ethics so that they can think through the implications that these technologies will have."
- Anco Peeters
We can also ask why people felt deceived by Google's socially sophisticated digital assistant. Perhaps it made people realise how much trust we have unknowingly placed in the designers of such technologies.
"I think it's really important we have these conversations even though they're uncomfortable," Dr Wiblen says. "When people feel uncomfortable, we can ask, what made you feel that way? And rather than questioning whether technology is good or bad, we can ask is it convenient or efficient? Is it ethical or responsible?
"Humans can enact agency over how we design and use technology. Together we should think about what we want the world to look like."
Mr Peeters agrees: "The design of AI algorithms should be more democratic. It should involve the end users as well as people with a certain background in ethics so that they can think through the implications that these technologies will have.
"We must be wary that we now have these addictive algorithms that are designed to push our buttons in ways that we might be vulnerable to."
"We don't transact with companies and organisations. We engage with other human beings who have designed that organisation or told us what it stands for." - Dr Sharna Wiblen. Photo: Paul Jones
We can also learn from looking at how we engage with technology. There is, for example, concern about how children's interactions with digital assistants - whether bossy or lazy and dependent - may affect their social or cognitive development.
Mr Peeters suggests considering how technology is influencing our behaviour rather than trying to decide what is right or wrong. "Virtue ethics is an approach that looks at how we can develop a good character, and whether our interactions with technology change the way we behave towards other humans.
"Our use of technology might steer us in a direction where we develop the wrong kind of character states - vices instead of virtues."
Yet we shouldn't be afraid of AI, he affirms, because it depends on how we choose to use it. "AI allows us to investigate the true capacity of human cognition."
Rather than solving difficult problems or making our lives ever more efficient, the greatest benefit of artificial intelligence might just be what we learn about ourselves in the process.