Being Human in the Age of Technology

A broad look at the current state of AI and some of its implications, including for society and human nature, was offered at the Bled Strategic Forum panel dealing with challenges related to the fast development of technology.

Mr Grobelnik, researcher at the Jožef Stefan Institute’s AI Laboratory, described the current state of AI as the result of two major leaps in the 1980s and 1990s and a third seen in roughly the past five years. Computers can now see, hear and read, but they still cannot understand, meaning a lot of things can be done, but not as many as the current hype would lead you to believe, he said.

Mr Martin Svik, Executive IT Architect at IBM Czech Republic, echoed the last point, saying there is quite a gap between what he reads in the news “and what we’re actually able to do”. Still, the technology can already be used in any industry – in agriculture or healthcare, for instance to help fertility clinics. However, he also raised the issue of reasoning. “Our approach is to do it as a second opinion. I hope that the embryologist will always be the one making the final decision and not machines.” The gap is still there and “I see that in almost any industry”.

Mr Iskren Krusteff, chairman of Global Entrepreneurship Monitor Bulgaria, also commented on the hype, arguing we still do not see many companies making significant profits through AI. He noted the need to bridge the gap between the ecosystems involved in developing and using AI, underlining his belief in an entrepreneurial approach. “Last year only 1% of data scientists were deployed in the business environment…We don’t have the right people with the right skill sets yet.”

Mr Jakob Hjortshøj, associate at the Ministry of Foreign Affairs, raised the geopolitical aspects of AI, in particular China’s plan to leapfrog past the Western powers via giant investments in AI. He also pointed out that technology is permeating all aspects of society and that businesses and non-state actors, as the main facilitators of AI, are entering politics, one example being Cambridge Analytica. Denmark is aware of these realities and is pursuing diplomatic links with these non-state actors via its tech ambassador approach.

Job loss as a result of AI was touched on in the discussion, with Hjortshøj arguing it needed to be prepared for before it happens, especially through changes in the education system. Krusteff is not so worried, arguing the disruption will be gradual and that those ready to adapt will not have problems. “The next job is whatever you come up with,” he said. Grobelnik added that things are very difficult to predict, with projections speaking of 20%-80% job loss, “which translates into ‘we don’t know’”. His prediction is that society will adapt in about 5 to 10 years, but he added that he expects another major technological leap in about 10 to 15 years, possibly bigger than the last.

In a debate that had moved away from the direction indicated by the panel title, an audience member raised the point that being able to make exceptions to rules is one of the central features of human nature. He wondered “whether we are ready to divide ourselves from this nature … and become as rigorous as so many machines and things around us today”.

While Mr Krusteff responded by arguing a future symbiotic relationship with AI can be a natural thing too, the panel’s moderator Ms Katja Geršak, Editor of Bled Strategic Times, noted that humankind has failed to match the fast technological development with emotional intelligence development. Mr Krusteff agreed, arguing this is where his main concern regarding the future implications of AI actually lies – a dystopia where AI is, for instance, not ready to tolerate human flaws.

The importance of discussing ethics along with AI – who gets to determine the values that AI follows – was also underlined by other participants, but it was also acknowledged that these debates are still in very preliminary stages. In the tech community “nobody is defining values, it is a competition of who can do more, nobody is thinking about the consequences,” said Mr Grobelnik, who had also spoken of the vast threats of societal hacking, meaning manipulation, entailed in big data and AI.
