Digital Futures – Hillary Rodham Clinton Global Summit Event

The opportunity to listen to experts discuss subjects close to your heart doesn’t come around often. Luckily, that’s exactly what the Global Challenges Summit provided.

How do we address issues with accessing technology? How do we heal fractures in global digital innovation? How do we use Artificial Intelligence (AI) for social innovation and improvement? These are just three of the big questions addressed during the Digital Futures discussion by the group of experts best equipped to answer them.

Dafna Feinholz from UNESCO, Vint Cerf, Vice-President of Google, Baroness Martha Lane Fox, founder of lastminute.com, and Chris Ganje, CEO of Amplyfi, made up the panel, which was chaired by Professor Matt Jones from our very own Swansea University.

With just over an hour and a lot of ground to cover on what the future of technology may look like, it was interesting to hear Vint speak about the original intentions behind the internet. As one of the internet's founding fathers, he is well placed to say that it was never designed to exist in its current fragmented form, and I believe him. The internet is a mirror of society, and nobody's too thrilled about how it looks right now. As we can see in countries such as Russia and China, the internet now has its own microclimates, isolated from the global network. This fracturing is troubling: it denies us the ability to build a distributed data centre and stops the free flow of information. The sharing of knowledge is powerful, and we lose it through fragmentation.

And this is certainly the case when we consider the fragmentation that comes with digital poverty, which, according to the panel, is one of the biggest obstacles for many populations today. The issue has affected children in particular: those without access to technology have struggled with education, something the pandemic brought into sharp focus. Although it will take time to close this gap, access to basic technology is a human right. To overcome these issues, the panel suggested sharing data and resources, as well as providing further funding. And of course, we need to co-create and ensure that all groups, particularly women and BAME groups, are better represented in STEM to reap the benefits of diversity. After all, how do you solve a problem if you don't know it exists? We need diversity among owners, innovators, and creators to accomplish this.

To expand on this point, smaller tech companies have a significant role to play through market disruption. There is space for them to create innovative solutions that run on the software of tech giants. However, it will take a radical approach to deliver effective results.

Still on the topic of innovation, the panel were asked to discuss their views on how AI could play a part in social innovation. To kick off the conversation, it was stated that machine learning and AI will flourish in areas where humans aren't able to process substantial amounts of data, such as the medical sector. By implementing models to complete such tasks, resources would be freed up, reducing pressure on services.

Interestingly, this was countered slightly by the reminder that we should not dismiss AI's limitations and the ways in which it can fail us. It was even argued that we should try to predict how machine learning could fail, to ensure these technologies are safe, secure, and reliable enough to protect the public. However, as the panel noted, social innovation doesn't have to, and shouldn't, rely solely on technology. Real change comes from having the right mindset, taking risks, and, where necessary, disconnecting from technology altogether.

So, what about the ethical implications of AI?

Well, to sum it up in two words: 'transparency' and 'accountability'.

We need to be completely transparent about corporate responsibility, which includes being open about who is creating technology and how it is being made. Without transparency, the tools may as well be useless.

In terms of accountability, thought-provoking points were raised, such as the need to think critically about the technology being built and to ensure that regulators are external. In the words of Chris from Amplyfi, allowing companies to regulate their own ethics is like "letting a kid grade his own exam". This must be addressed to prevent the ethics-washing currently going on in many big tech companies. We need accountability, and we need it fast.

To conclude, the panel were asked to comment on what they deemed to be the light in the darkness. The panel agreed that there is an opportunity to teach children the power of critical thinking so they can ask questions such as “where did this information come from?” and “what does this technology do that I may not agree with?”. The future of ethical technology relies on this.

Leaving the webinar, I can’t say for certain that I had more answers than questions. Regardless, if I came away with anything, it’s that questions are vital. It’s only by asking questions and challenging the norm that we can arrive at innovative and powerful solutions. So, let’s keep asking.
