Predicting The Future of AI and Work with Futurist Sinead Bovell

Sinead Bovell first rang the AI alarm bells three years ago in a now-viral Vogue piece titled “I Am a Model and I Know That Artificial Intelligence Will Eventually Take My Job.” The model and futurist, who spends her days analyzing technology’s impact on our futures (especially where it intersects with work), is at the forefront of thoughtful AI discourse. Her stance is balanced: she doesn’t downplay AI’s impact on our jobs, and she recognizes its vast potential, but she isn’t sounding doomsday alarms either. Her motto is that the best thing we can do for the future is to prepare for it.

Here, Sinead Bovell and Girlboss editorial director Liz get into the AI nitty-gritty.

Liz: So, to start this discussion, I wanted to ask you: Does ethical AI exist? What would that look like? 

Sinead: I think we’re in the process of carving that out and defining it. But when we use words like ethical or responsible, it raises the question: For whom? Whose version of ethics are we talking about? And who gets to decide what is responsible or ethical? There’s an inherent tension within those words themselves. Whose needs are we centering? The more societal conversations we have about AI, the more voices we bring to the table, and I think that’s always a good thing.

Liz: So who should get to define it? 

Sinead: The public needs to have a say, along with the private sector, government and regulators. It really needs to be an all-hands-on-deck situation. We’re in the very early days of the AI age and what it will look like. We need all sorts of community groups, non-profits and individuals coming to the table to share their perspectives. We need to make sure there’s a call for all sorts of voices; right now it seems like we’re only tuning into big tech, but there are other voices we need to include.

Liz: It sounds to me like this could be the next big labor movement through collective action, much like the union fights of the past century that gave us things like the five-day work week and worker protections.

Sinead: We definitely need to have societal conversations about the workforce, because what we know for sure is that AI is going to radically transform the world of work. What’s not clear is how AI is going to transform the workforce, and that calls into question what a social safety net should look like. Even if we look at history: electricity was invented in the late 1800s, but by the early 1900s only about three percent of American households had it. It took decades to actually transform society. So even in these early days, it’s really hard to say how AI is going to transform and shape society.

But what is clear is that there’s going to be a lot of disruption. So whether that means sponsored re-skilling and retraining, or some form of financial support as people transition or re-skill, we need to talk about it. As for conversations about things like universal basic income, I think we have to tread very lightly, because who’s suggesting it matters. When it comes from some of the bigger tech companies, that doesn’t seem like a great solution to me: a world where we just let tech do everything and we don’t contribute.

Liz: Some equivalencies that people draw when it comes to AI is, “Oh, it’s just like when the internet was invented, or social media.” Is that valid? 

Sinead: It’s valid because a significant number of jobs that people have today didn’t exist 30 years ago. Now every single company needs a social media manager; a decade ago, not at all. Most industries we interact with today have a layer of the internet: e-commerce, the creator economy, streaming. It’s hard to say when and how much AI will have that kind of impact. It hasn’t yet been adopted at scale, and that’s where we’ll see the system-wide changes.

Liz: Something I’m seeing, and I don’t know if you notice this in the rooms you’re in, is this kind of us-versus-them situation where tech companies are all for it, saying “adapt or die,” and then there’s the other side, which is the government or individuals who are sounding alarms and trying to pump the brakes.

Sinead: I’ve noticed more the confusion society feels as companies move full steam ahead while also talking about the inherent harms and dangers of the technology. It’s made people feel very uneasy and uncertain.

Liz: What’s the way out of that?

Sinead: It’s a big conflict of interest when the only players that can really evaluate these technologies are the companies themselves. You do have people doing AI safety work, but largely from the sidelines, or they’re an in-house group, but then you’re still compromised. We need independent researchers who can evaluate these tools and put in guardrails before they’re deployed, so there isn’t this whiplash when a tech company goes live with something and we’re all scrambling to pick up the pieces.

Liz: I want to further define what responsible AI is, and some of those bare-minimum standards that we have to clear.

Sinead: Is it being built in a way that’s fair and interpretable? Is privacy being considered? Whose data is being collected? Do these things work as they’re supposed to? We sometimes put blind trust in AI and treat it like magic, but if AI has an error, that can be really consequential in the long run. Is it discriminating against certain groups? What biases are present? Who is in the coding room? Then there’s transparency: understanding how the system works and knowing when you’re interacting with AI-generated media or systems. If AI is going to be the layer we build other things on top of, like healthcare, education and immigration, these systems need to be secure, and we can’t verify that if there’s no external audit system. Accountability is really important, too. We’re in this strange time: what happens when AI makes a mistake? Who’s accountable? When ChatGPT makes an error, or harms someone, we need to make sure there’s a pathway to recover any losses. Accountability also guides how companies build things; when companies think there’s no accountability, that’s when they move faster and break things in the process.

Liz: And the devil’s advocate argument to that goes something like, “Oh, tech moves fast, they’re innovators and disruptors!” They’re looking at the bottom line and efficiency; I don’t think they’re going to want to slow down and say, “Hey, let’s do this really responsibly.” For many, it would be naive to expect them to.

Sinead: Here’s where I would push back on the idea that there’s a tension between innovation and taking time. Innovation can continue to happen, but deployment doesn’t have to be immediate. You can continue to create tools, but you don’t have to put things out into the real world without any guardrails and treat society as a guinea pig. I think you can be pro-innovation and also pro-safety and guardrails. It’s like pharmaceuticals: innovate, solve all the things, but you can’t suddenly release that medication onto the market. There’s precedent for this.

Liz: In the US, worker protections are already not great, especially for gig workers. I think that adds another layer of struggle in getting these social safety nets in place, but AI might add more urgency. We already have so many things to protect workers from; this is just the next one.

Sinead: So we need to have quite an important societal conversation about what work looks like going forward and what worker protections look like. That can’t be a conversation led by the people building the innovations, and it can’t be a mere consultation; it has to be led by the government. We need everyone in these rooms, advocating for a future we want to be part of.

Liz: I wanted to ask you this personally: as you approach this work with AI, are you coming at it from a pessimistic or an optimistic lens? Where are you on that scale?

Sinead: I’m a futurist, so I don’t subscribe to pessimism or optimism by default. I really try to follow the data as objectively as possible. That said, I firmly believe the optimistic scenarios are possible. I wouldn’t be able to do what I do if I thought we were only headed for a dystopian future. I find that incredibly motivating.

Liz: Let’s put that optimist hat on for a moment then. Optimistically speaking, what are some of the positive examples of AI’s impact on our work? 

Sinead: Picture being able to stream artificial intelligence the way you stream the internet: being able to approach a problem or a task and point artificial intelligence at it to help you. That’s hard to even imagine right now. And when you think of the layers of work that happened on top of the internet, all of the apps, Uber, Google, all of the companies that have blown up as a result of the internet, data and smartphones, imagine what’s going to be possible when we build things on top of a layer of artificial intelligence. I think it’s going to be really, really exciting.

There’s a lot of tension and uncertainty in what’s happening in the world of AI and creativity. A professor once told me, “A lot of people are creative, but not everyone can draw.” That was a lightbulb moment for me about the role AI will play in shaping lanes of creativity in the workforce. There’s a skill gap that could be closed with artificial intelligence; you can kind of leapfrog over some of the things you don’t have. Maybe that’s the ability to paint or write a script.

Liz: My counterargument to that, though, is: if there’s a superhumanly talented painter, might they come to resent the people who are getting opportunities by skipping a step or lacking those innate talents? That’s a very hypothetical scenario, obviously, but as a creative myself, I think we could get a little salty about it. I was in an AI workshop during Milan Design Week, and we had AI generate all of these wacky furniture concepts, like a greenhouse that’s also a barbecue that’s also a shower. And, sure, AI can generate what that looks like, but no actual designer is going to want to touch that.

Sinead: But creatives can adopt these tools for themselves, and they will be light years ahead of the rest of us. I’m never going to be a threat to a trained graphic designer. I think it puts creative tools in the hands of many more and opens the doors for many more. And also, the definition of creativity and what it means to be a creative is changing. We now have something called the creator economy that didn’t exist before. 

I would encourage companies to lean away from the hype and lean more into long-term trends, because if you lean into something that’s trending in the moment, you’re likely to make mistakes. Case in point: Snapchat released its AI chatbot a few weeks ago, and I understand the premise of it. But I think it was premature, and safety tests turned up all sorts of red flags, especially with young users.

Liz: As a futurist, what do you get asked the most when it comes to AI?

Sinead: People ask me, “How do I keep up?” and my advice is to just do a simple Google search, even if it’s only once a month: How is artificial intelligence changing my field?

Maybe in five years, 40 percent of my role will look quite different. How do I want to fill that 40 percent? Are there new skills I should learn as a result, or areas where I should pivot? Just doing a quick monthly Google search can transform your outlook, and if you have any anxiety about AI, it can help with that too.

Liz: And that approach definitely applies to how we should be thinking about our careers anyway. I have no doubt that my job won’t exist in the same way for the rest of my career. I’m already doing some of that checking in with myself, being aware of what I bring and staying relevant. I’m not perfect at it, but it’s something I have to consciously accept in order to not feel like I’m falling behind. And I think a lot of people right now, when it comes to AI, are asking, “How much should I care about this?” or, “How long can I go before I have to start thinking about this?” I think we’re hitting a point where we don’t necessarily have to embrace it, but we at least have to become aware of it.

Sinead: We’re so much better off preparing for the future than trying to dodge and avoid it.
