E28 The Future of Trust
Imagine a global event that rocks the world in a way that fundamentally changes how we trust each other, our governments, companies, and other institutions. That is the premise of the Trust 2030 report authored by Álvaro Marquez and his team at Method in partnership with Hitachi. As the COVID-19 pandemic continues to unfold, we may be living through the very event that could so significantly alter our concepts of trust.
In this interview, Álvaro joins host Don MacPherson as they discuss Trust 2030 and its three potential scenarios for life over the next decade. They also discuss deepfakes, fake news, social media, and the appeal of outrage culture.
Season Three of the podcast is dedicated to exploring the future and how life is sure to change over the next decade. This episode provides insight into the future of trust and guidance for how to better understand that future.
A true global citizen who speaks five languages, Álvaro Marquez is Chief Design Officer at Method, where he led a team of researchers for the Trust 2030 Project in partnership with Hitachi.
Don MacPherson:
This interview was recorded near Frankfurt, Germany on January 25th, 2020, just days after the first confirmed case of the coronavirus was identified in Europe. In the two months since, the world has changed in ways that seem unimaginable. Much of this interview focuses on a report called Trust 2030, which outlines three potential scenarios for how the world will evolve over the current decade, and how trust will be altered during that time. The scenarios are triggered by a hypothetical event that rocks our faith in governments, institutions, companies, and individuals. It is very possible that the COVID-19 pandemic we are living through right now is that event and will fundamentally reshape the way we trust in all aspects of our lives. Please keep the timing of this interview in mind, as well as how this pandemic is changing our ability to trust.
Fake news, data leaks, volatile cryptocurrency, deepfakes. Today's world is filled with technology that challenges our understanding of what is real and where we can place our trust. Hello, this is Don MacPherson, your host of 12 Geniuses. For a quarter-century, I've been helping organizations and the leaders who run them improve performance. Now I travel the world to interview geniuses about the trends shaping the way we live and work. In a time when we are more aware than ever that lies and misleading information surround us, we must ask, “Who can we trust? How do we gain and maintain trust?” Today's guest is Álvaro Marquez, Chief Design Officer for Method. Álvaro is an expert in how we design systems of trust.
He and his team at Method conducted the Trust 2030 project in partnership with Hitachi. Trust 2030 takes a deep dive into the foundations of trust and how trust will look in the future.
Álvaro, welcome to 12 Geniuses.
Álvaro Marquez:
Thank you. Super happy to be here.
Don:
Can you describe the work that you do for a living?
Álvaro:
Yes. So, I'm the Chief Design Officer, the chief designer, for Method, which is a product design consultancy. It was founded in 1999 in San Francisco, and we have two offices now: one in New York and one in London. I lead a fantastic team of designers, technologists, and business designers. We help our clients innovate, basically launch products and services to market that are really meaningful, that are innovative, and that make things move.
Don:
What services does Method provide?
Álvaro:
It's primarily consultancy and primarily design services. So, anything from experience design to product design, to product strategy development and product engineering. Anything related to the product development life cycle, that's what we do.
Don:
Method embarked on a study called Trust 2030. You did this with Hitachi. Could you describe what Trust 2030 is?
Álvaro:
Trust 2030 is a project that we did together with Hitachi early last year, where we were basically tasked with helping Hitachi understand how trust is evolving and changing towards 2030, and how we should think about trust in the context of the digital society. Hitachi is a really big company. Hitachi will do anything from making your hair dryer or microwave to installing a power plant and creating the factories that we rely on for self-driving cars. So, the scale and scope of the remit is really big, and the Trust 2030 project was a way to create a unifying vision for Hitachi to take internally and externally, and to create a point of view that says: this is what we should be thinking of when we think of trust.
We did the project in partnership with Hitachi's social innovation team, which is located in Tokyo, with some of the team members working in London. We were looking at a future set of scenarios in which a particular event in the world accelerates different approaches to, and different understandings of, trust in different directions. And we were walking back, mapping back: what does it mean to create trust, to have trust, to be trusted in those different societies? And how would a company like Hitachi have to think about doing business and understand the world to be able to stay relevant?
Don:
I want to talk about those future scenarios, but before we do that, could you talk about the research and how it was conducted? Because it's quite extensive, and there were multiple methods used in order to create Trust 2030.
Álvaro:
Yeah. So, the whole research piece took about three months, a good three months. If you're familiar with human-centered design and the double diamond way of doing things, we conducted a significant amount of secondary research and desk research, which we then used to synthesize key trends and things we were seeing happening in the world, which we in turn used to create a set of hypotheses and working assumptions that could inform further conversations with experts. So, it was a mix of secondary research, desk research, and primary research. And we wanted to make sure the research was representative of the whole world, basically. Not just particular to one country, one market, one culture.
It was distributed across pretty much every continent. We were basically trying to understand: what is a model of trust that you can extrapolate to these different societies and that still works, but where the forcing function changes? While the logic is the same, the result of that trust model changes depending on the context.
Don:
What's the state of trust today?
Álvaro:
That's a really good question. I was discussing this with someone last week: the impact of deepfakes and synthetic imagery and fake news, or propaganda, however you want to call it. I think it's really fundamental. We still don't know what we're looking at. What's interesting about deepfakes and fake news is not so much the artificial synthetic images or synthetic audio or pieces of misinformation in their own right. What's really fundamental is the significant shift they are creating in us no longer trusting evidence, written evidence or recorded evidence, which is basically how we have been creating our identity throughout history, right? I mean, that's what history is all about.
The beginning of history counts as the moment that we start to record, to write things, to document what happens, and to create a narrative and to make sense of it. So, all of a sudden, when you cannot really trust evidence, you cannot trust what you see on television or what you hear, you cannot trust whether you've seen someone saying something because that might have been doctored or it's indistinguishable from reality. All of a sudden, it begs the question, how do we continue to create our identity and how do we continue to trust ourselves to be able to function as a species?
Don:
Well, the ability to doctor things has been around for decades, maybe even longer than that, but it's the democratization of the ability to doctor…
Álvaro:
Absolutely.
Don:
… that has changed, right? Because you've heard of governments manipulating photos and things like that, but you or I can do that.
Álvaro:
Yeah, in real time with our phone.
Don:
Yes. We can manipulate video, and we can create audio of things you didn't say or I didn't say. That's very powerful, and it's erosive toward trust.
Álvaro:
Yeah. So, it begs the question, right? How do we create systems that place value not just in the documentation of those proofs, but in an inherent system of ethical or moral codes behind things? Bringing that back to the context of Hitachi, the beginning of the project was actually disarmingly simple. It was all about trying to understand the rating mechanisms of, say, when you take an Uber or a Lyft, or when you go to an Airbnb, or any service you access where you are supposed to rate someone: how many stars, how good was the service, and whatnot. We were trying to unpick how those trust systems are constructed, and who's to say that something is five stars versus one star? And what's the ethical implication of rating someone as good or bad?
While things might be factually true, they might also be complete lies and fabricated stories, right? That was the beginning of thinking that perhaps there's something here in trust that is bigger than just the mechanism of rating something you consume or access. What is it? How do we trust each other? How do you leave a trace? What would happen if there's a major social event that accelerates these trends in different directions? What does the world look like? What do we consider trustworthy or not trustworthy? So, that's how it all started.
Don:
Well, I think this is very fascinating, and just to share a personal position: it's 2020 right now, and I am frustrated with being manipulated and being pandered to by individuals and by companies already, and the tools are only going to get stronger. So, I can imagine that if there are other people who feel the way I do, there will be some sort of revolution, and I don't know exactly what that's going to be. And we'll get into that, because I think your three scenarios will outline it. When you think about the state of trust today, how would you compare it to periods over the last 50 years?
Álvaro:
Well, I can only think of it in a very simple way, through the lens of my profession, which is product development and product design. I think, if you look at the early days of branding, of product branding, it was pretty much all about that, right? It was about putting a name and a logo, a stamp, on top of a piece of soap or some laundry detergent or some fast-moving consumer goods, so that you could trust it would be of the same quality as the last one you had. It was about productization and the consistent manufacturing of a certain set of products that you could consume and buy and purchase.
Inherently, trust, through the lens of product design, has been very closely related to the development of the mass market for whatever goods you buy for the home. That has had a magnifying effect that has gone beyond products and services. As the products and services we consume become more and more digital, that model is not just about the piece of soap, but about the promise of the experience: how will I feel when I use the soap? So, it's still branding, right? It's still about a promise, delivered or not. What's different now is that the rest of the, let's say, architectural pieces of society are also becoming digital. So, we are using the same interfaces to rate how good my Lyft was, but also how good the food is in this restaurant, and how trustworthy this politician is.
So, all of a sudden, you start to mix signals, and it's difficult to tell what's real and what's not, what's important and what's not. I think that's what's significantly different from before. Back in the day, when the government spoke, there was a very particular way of speaking and relating to the citizens, right? And the media was inherently trustworthy because there was a monopoly on who says what. That's not the case anymore. Through Twitter or Facebook or Instagram or whatever, the government shares the same literal pipes as the brand that's trying to sell you a pair of sneakers or the influencer who's showing you how cool their life is. And so, it's creating this weird house of mirrors: what am I looking at, really? What's behind the distortion? And to your point, you cannot help but feel that there's some kind of manipulation somewhere. You might not be able to spot it and put your finger on it, but there's something there that is unsettling, that makes you question, makes you doubt: is this really true? Is this for real?
Don:
In the Trust 2030 study, you identify the potential futures related to trust. Can you describe these three different societies or futures?
Álvaro:
The main premise of the Trust 2030 project is one significant event in society that propels three different timelines, if you will. So, there's a key event, which in our working hypothesis is a massive leak of data that exposes some of the inner workings of how certain decisions are made in society, by politicians and brands and corporations. It's almost like you see the man behind the curtain in The Wizard of Oz, and all of a sudden, you're exposed to the truth of what's behind it. So, basically, we take these assumptions and hypotheses and, through conversations with subject matter experts in different geographies around the world, we take what's particularly unique about the traits of these societies and extrapolate them as far as we can, to try to see: what would happen if this is driven to 100?
So, there are three different societies, or three relationships to trust, that emerge from that key event. One of these societies places trust and trust-making in absolutely transparent and decentralized decision-making. A silly example is a politician who is broadcasting their life 24/7, 365 days a year, and explaining why they make the choices that they make. It's almost like some kind of Big Brother, but as opposed to being entertainment, it's mapping, for safety or for trust reasons, why someone does what they do. Another society is pretty much the opposite: highly centralized and curated, consolidated by big corporations, where everything is highly customized to you. Everything is made for your particular context, and it's unique, but there's no transparency as in, how is this decision being made?
How is this data being gathered, and how do I know that this is the actual pill I need to take? Yes, it has my name, and it says, “Hey, Álvaro, this is your snack for after your workout, which you're going to take at 7:15 in the evening. This is just what you need.” But it all comes from the same single center of truth and gravity, which is fairly different from the previous society. The last one we envisaged was a society built on the power of the network: a network that is autonomous and distributed. Trust is placed not in individuals, not in transparency, not in a single unified view of the world, but in the fact that the network is what articulates trust. It's the vehicle that actually enables trust to happen, if you will.
And there are a few other examples I can give you there, but those are the starting points: the same key event propels different timelines and demonstrates different aspects of trust. The question that usually follows is: so, which one do you prefer? Do you think any of these is going to happen? Is one more likely to take place than the others? And the answer is, I prefer none of them, and I prefer all of them. What usually happens is that all of them manifest themselves in different ways in different parts of the planet, from a governmental standpoint. Even what we're hearing nowadays about the coronavirus in China, where the information comes from and who controls it, is particularly telling of a centralized society in which there is basically one single source of truth.
Don:
You have the three scenarios: the first is decentralized and transparent; the second is centralized and curated; the third is distributed and autonomous. You pointed to China, and it would seem like they would be centralized and curated.
Álvaro:
Yeah. Could be, yeah.
Don:
Or closest to that or maybe North Korea, I'm not sure.
Álvaro:
At least at the higher level when it comes to central government; the structuring body, yes.
Don:
But is there an example of maybe a country or a community that is decentralized and transparent that you can think of?
Álvaro:
Yes, there is. But before jumping into another society, you could also think of a centralized and curated network not only as a country or a government, but also as a corporation like Amazon or Facebook.
Don:
Yes. That's what I was thinking about, too, is like, in the centralized and curated example, you are basically marrying a company.
Álvaro:
Yeah.
Don:
And this company is going to make choices for you.
Álvaro:
On your behalf.
Don:
On your behalf based on data. And it's going to be optimized for you, but it's really taking choice out of this scenario.
Álvaro:
It is.
Don:
I took your study. I took the survey. I wanted a distributed and autonomous society. But as I looked at the three, I came up with centralized and curated, and I'm very disappointed in myself.
Álvaro:
Yeah. It's a funny one, isn't it? When you actually do it, what you assume you're going to get is not what comes out. This is exactly the point: you cannot really force a particular outcome in the bigger scheme of things, but you're making decisions day to day that actually take you, inevitably, in a particular direction. That's what's so difficult to quantify, and that's why it's impossible to foresee a particular future. This is some of the work we do with our clients, where the ability to envision different scenarios is critical, not in order to anticipate those scenarios, but to build a muscle, to think in different contexts and in different ways and different dimensions, so you can react and adapt in an agile way, in a swift way, as opposed to betting everything on one particular way of doing things.
Don:
One of the things that's fascinating, and I heard this again recently, is that 90% of the data that has ever been created was created in the last two years. This is something my former business partner used to talk about when he would speak, and he was saying it in 2014, 2015.
Álvaro:
It's still true today?
Don:
It's still true. We were blown away by that in 2014 or 2015, and it's still true. And so, the amount of data grows exponentially and will continue to grow. I think something like 127 new devices are joining the internet of things every second. That's like 11 million a day.
Álvaro:
Nice.
Don:
So, you can start to imagine, okay, well, the amount of data will not slow down. Data creation will not slow down. It will continue. And so, when you think about that, you can think about, “Oh, well, maybe these scenarios are not so far-fetched.”
Álvaro:
And that is something that tends to happen to us. It's really funny to witness. When we work with clients to create these radical future scenarios, 2020, 2030, 2050, you map out this apparently ridiculous nonsense, and then, six months in, it turns out that that thing becomes true. So, it really tells you something about our ability to quantify the amount of change and the speed of change. I don't remember who said it, it doesn't matter, but someone said something like, “We are really good at underestimating how much effort it takes to do something, and we are really bad at estimating how quickly things change,” or something like that.
Don:
Well, the quote that I've often heard is we overestimate what we can get done in a year; we underestimate what happens in 10 years.
Álvaro:
Exactly. Something like that. Yeah.
Don:
I don't know who originally said that. The first person I heard say it is Eric Schmidt from Google, when he was at Google.
Álvaro:
From Google. So, it's definitely coming from a technological background.
Don:
For sure, and others probably have said it before, but that is so true.
Álvaro:
Yeah. For me, what begs the question is: how does having access to more data change the way we use data? How does it allow us to ask questions we didn't even know we had, after the fact? Because today, there are basically two ways we deal with data. Either you just hoard and harvest it and try to make sense of it later, and then you realize that it's impossible, like looking for a needle in a haystack. Or you have a hypothesis, set some markers, and start looking for indicators that the hypothesis might be true. When it comes to data creation, you need to think about cleaning up the data, sorting the data, tagging the data, processing the data, visualizing the data, accessing the data.
So, all of a sudden, it's a soft infrastructure that you need to put in place. It's not something you do once and it's done and you're finished. It's almost like you need to set up new highways, but for data. And those highways need to be accessible and really well built; there have to be road signs and directions. I suspect that a lot of the most innovative product and service solutions we've been doing have been related to data and to the ability of companies to create soft strategies, if you will, that rely less on “what does the data say?” and more on assuming that this data will be available by a certain point.
Don:
I want to get back to the three different scenarios or societies that you are predicting for the Trust 2030 report. The first one is decentralized and transparent. In the report, you talked about things like connected ID cards that verify expertise, on-demand medical kits, and mobile devices with notifications. What's the upside and downside, the benefits and liabilities, of a decentralized and transparent society?
Álvaro:
I don't think there is a clear, straight-up upside or downside. They just represent a model for how trust is created, how trust is measured, and how trust is documented. Going back to what I was saying earlier, some of the more indicative qualities of these societies are already present in the world around us today. They are just taken to the extreme to paint a different paradigm. The carbon footprint of that banana that comes on the label, or the provenance, or the human rights: is it a fair-trade product, has it been harvested by people who were paid a fair wage, and so on. When we were designing these scenarios, that seemed a bit far-fetched, like an unreasonable thing.
It's almost impractical. You cannot really trace every single banana, right? Every single product. But guess what? Fast forward a few years, and that is almost to be expected. I find myself making choices in the supermarket based on the country of origin of certain products, like tomatoes. I don't really need to buy tomatoes that come from that far away. I just need local produce, because I know the choices that I'm making, and I know how that product and those choices impact society. So, the point of that decentralized and transparent society is basically devolving the trust-making mechanism to every single micro-interaction, to every single person, at every point in their life, as opposed to the centralized and curated one, where there's a big company that, on your behalf, assuming your best interest, makes every single choice for you beforehand, ahead of your choices.
Again, there's not necessarily a clear upside or benefit to one or the other. They're just different ways of creating that model. I think the purpose of these three societies is to look at ourselves straight in the face and ask: what would life look like if this were to become the norm? How would we behave, and how would we relate to each other? I kind of prefer this society as well, the decentralized and transparent one.
Don:
You do? This is the one you prefer?
Álvaro:
I do. I do. I don't know. It feels more empowering somehow. It feels, bringing it back home, closer to the way I think we should be relating to each other, which is on a one-to-one interaction as opposed to trusting nebulous systems.
Don:
Yes. In some ways, like Netflix is already curating my content for me.
Álvaro:
Yeah, definitely. Definitely.
Don:
And they're right very often, but there's a part of me that is very uncomfortable with that because I love to explore. If all the data they ever had was the music I listened to up until I was 25, I would still be listening to Bob Seger and Led Zeppelin, which is fine. Nothing wrong with that. I still enjoy it, but I would never have been exposed to Miles Davis.
Álvaro:
Exactly.
Don:
I would never have been exposed to Pavarotti, and some of these other things. So, I love that exploratory ability.
Álvaro:
It definitely homes in on a very interesting space and asks very articulate questions, like: where's serendipity? Where is the ability to be surprised, to let things happen in their own right without having to be planned for and calculated and accounted for? When we make choices based on convenience or smooth interactions, or something that is effortless and seamless, we're basically priming these kinds of systems to evolve. And we are reinforcing that, because we're taking our agency out of the question and granting somebody else the ability to make those choices on our behalf. Case in point, Netflix: because I know that you like this, let me give you more of that, because I really think you're going to enjoy it. Or that podcast, and whatnot. With music and with literature and with food and whatnot.
I think it creates that view of, yeah, everything is perfect. This walled-garden world where everything is perfectly optimal, where I don't even have to break a sweat, because everything just works and everything is tailored to exactly what I need and how I feel at every single point of the day. But what's the tradeoff, right? What are we losing when we get a perfectly smooth society? And this doesn't only happen in the product world. We know, historically, that whenever there's big economic turmoil and depression, populist voices tend to rise in civil society, because they offer quick, easy answers that take away the uncertainty, and that promise order and a unified view of the world and no dissent, and everything will be smooth again. We know what happens when that actually comes into full power. So, yeah, it's not dissimilar.
Don:
We'll touch on this at the end, but where can people find this report?
Álvaro:
Well, it's on method.com, our website. If you go to the work section or the futures section, you will find the Future of Trust. There's an explanation of the project and a bit of a case study, plus you can access the report. Then you can go pretty deep into the content: understanding how these societies are constructed, how we get to those societies, why they behave the way they do, and why they exist.
Don:
About 40% of people who use social media don't trust social media or don't trust the information on social media. Do you see this skepticism of social media really changing the behaviors of the people who are using it?
Álvaro:
When the way you make decisions changes, what's right and what's wrong is no longer easy to tell. And when you optimize for a certain sort of behavior, then you need to readjust how you create content, how you create preference, how you create news. Perhaps we would never get the best quality journalism in the future if we hadn't gone through a patch of fake news and felt the damage that it can create. So, going back to the struggle and becoming a better version of yourself: perhaps this is a way of learning, because we never knew, and we're just figuring it out as we go. It's just that it's really visible, right?
Don:
Yeah. It seems like we really have to be diligent. And the other thing that I think of is there's an attraction to the outrageous. And when I say outrageous, I don't mean outlandish, but I mean things that make you outraged.
Álvaro:
It's a powerful feeling.
Don:
It’s a powerful feeling. Yes. I think we have to have an awareness: am I being manipulated? Am I the moth being drawn to the light here? And is there danger? Okay, that seems too enticing, too designed to make me outraged. I'm going to just turn it off, to your point.
Álvaro:
Step back and breathe.
Don:
Yeah. We talked about this a little bit earlier, but what are deepfakes?
Álvaro:
Deepfakes is the name given to a set of images or videos or audio pieces that have been artificially created. By that, I mean images in which you see somebody saying something they didn't really say, or behaving in a way they wouldn't normally behave. Some of the most notorious cases of deepfakes, as usual, started in the porn industry, where the faces of famous people are swapped onto the bodies of actors performing things that violate the integrity of that person. If that wasn't bad enough, you can think of it in the context of a politician declaring war on someone, or saying outlandish, outrageous things that may never have happened. Because of our ability to synthesize voice, modulate the message, and manipulate the image, it might look as if that person really said that.
The issue with deepfakes is not so much that they exist, because you cannot really blame mathematics for being accurate, but the intent behind their use: how they are weaponized to manipulate public sentiment and public opinion, and how they potentially erode our ability to trust the written record, the documents of our history that create meaning and a sense of belonging and identity.
Don:
Who's creating them, and why?
Álvaro:
Who's creating them? Everyone. You can create them. Just download an app on your phone and you can create them yourself in real time. Something that was very costly and expensive, that only Hollywood could do 20 years ago, that would take millions in investment in technology and skills, you can now do in real time. And sometimes you do it just for laughs: replace your face with a dinosaur face, change your gender, change your age, travel back in time, change your voice. It ranges from something as silly as that to mass manipulation campaigns that change public sentiment before an election, for example.
Don:
Do you think it's possible to legislate against the use of these deep fakes, particularly when the stakes are so high, like an election, or like maybe the ousting of a CEO of a powerful corporation or something like that?
Álvaro:
Yeah. This is a great question. I'm not sure if it's possible, but it's absolutely necessary. We need to create some qualified rules of engagement for creating those.
Don:
We have libel and slander laws.
Álvaro:
They feel kind of out of place. They don't feel-
Don:
It's not the same?
Álvaro:
No, not really, because you might be able to manipulate someone's opinion about somebody else without slandering them or without officially being-
Don:
No, no, I understand that, but I'm saying, could the legislation be comparable to like libel and slander?
Álvaro:
There definitely needs to be an extension because it's a new context, so it requires a new application and it needs to be reevaluated. I think it's really, really necessary. You need to have a code of conduct and a clearly delineated set of application contexts defining when something is allowed or not allowed.
Don:
Are there any technologies or innovations being used or being developed that can help build trust?
Álvaro:
When we started the 2030 Project, some voices around the project said, "Well, this project is overly dramatic because, I mean, we already know what the future of trust is. It's blockchain. That's what's going to give us the solution to trust." Well, guess what? It turns out that there is no particular technology that is inherently an answer to the question. Technology, again, is just an enabler; it is, in a way, the distributor of trust. But trust is not, per se, created with the tool. The tool is just an enabler. So, if we don't trust each other, no matter what pen we use to sign a contract, we will still not trust each other, no matter what the contract and the pen are. It's the same with technology. Technology is just a way to facilitate that trust being exchanged.
In the 1600s and 1700s, with the introduction of paper notes as money, the paper money said, "I grant that I will pay this amount of money to the holder of this note." And you could trust that this paper money had value. Our money nowadays is digital. It still holds the same value, but there's the figure of the central bank and the government who says, "Yes, I will pay this money to that person even if the money is digital." If you take away the trust in the government or the central bank because of manipulation, all of a sudden, the value of that paper is not quite the same. So, what we are looking at is actually our ability to trust each other, to trust the institutions that allow us to trust each other, and to trust ourselves.
I think that's the bigger question. There is no particular technology that in itself would solve that problem; it's our ability to commit to using technology in a certain way and not in another way. That is what creates the trust. Another very simple example is nuclear technology, right? It's the same technology that we use to conduct X-rays. We trust each other that you're going to look through me to help me feel better. It can also be used in a very destructive way. And we commit to not using that technology in that way. So, from my point of view, we're facing a similar dilemma. You can use technology to create deep fakes that are misleading and manipulative, or you can use it to recreate a memory you lost, say, the photo from that birthday party of your kid when she was three years old.
But because you have photos from the event right before and after, you're able to reconstruct it. And that is a positive thing, right? I think we still need to think through what we want to use this for. We haven't stopped and asked ourselves that question. We are rushing and plastering everything with the technology without thinking, how might this be used to hurt someone? How might this backfire? What happens if this gets out of hand or if somebody with bad intentions uses it against us?
Don:
Where can people learn more about you and about Method?
Álvaro:
Well, as usual, the easiest way is to go to the web, to method.com. It's a good place. And on LinkedIn as well. I've got my profile there. I like to respond to people. I'd like to meet people like yourself, bright individuals, have super engaging conversations, and exchange notes about how we see things panning out in the world. So, that's the best…
Don:
Álvaro, fabulous, wonderful conversation. I appreciate your time, and thank you for being a genius.
Álvaro:
Thank you so much.
Don:
Thank you for listening to 12 Geniuses. Our next episode will be about the future of entrepreneurship with Gino Wickman. After launching his own successful entrepreneurial endeavor at the age of 21, Gino has dedicated his career to helping other entrepreneurs harness their full potential. Devin McGrath is our production assistant; Brian Bierbaum is our research and historical consultant; Toby, Tony, Jay, and the rest of the team at GL Productions in London make sure the sound and editing are phenomenal. To subscribe to 12 Geniuses, please go to 12geniuses.com. If someone you know would benefit from the information in this episode, please share it with them. Thanks for listening, and thank you for being a genius.