In my first blog post, I did my best to explain my topic. I discussed the possible dangers in how technology has advanced and will continue to advance, the theories behind these dangers, and a proposal for how to delay or stop them. I looked at the opinions of some of the world's leading minds, such as Stephen Hawking, Bill Gates, and Elon Musk, and I looked into the way computers process information and how they could be integrated into technologies like smart contact lenses. Since that post, I've sent out a survey that has received 41 responses (so far) and conducted an interview with SLA's own Mr. Kamal. The interview was recorded on video, which I plan to integrate into a longer video in the future.
The survey showed me a few things. While most people have computer technology heavily integrated into their lives, few of them feel an emotional connection to it. Among my responses, I found a few incredibly in-depth answers from people who, I could tell, cared a lot about the subject.
Alright, so that last one was an example of something different: the mass of people who made jokes. I wasn't at all discouraged by these gags, though. They showed me how many people didn't take the subject seriously, or didn't care enough to give a serious answer. This is important. Instead of directly asking how much people cared, I got a more in-depth view of how they felt about it. People aren't worried about robots. "This is silly. This revolutionary walking robot looks like Uncle Jerald at 2 AM. How could that ever be a danger to us?" This argument reflects how humans are built to deal with most of their problems: neglect them until they're already in effect.

However, I'm not claiming that robots are our biggest problem. In my interview with Mr. Kamal, I asked him if he was concerned about a computer capable of human traits such as emotion. He responded that "the true intelligence and true meaning comes from synthesizing data into meaning, and [computer scientists] are very, very far from that. So I don't worry about that." I then asked if he thought it would be possible to safely regulate what's being created, which he immediately dismissed: "No, technology works best when it's unencumbered and people can develop it and figure out the great uses for it. And it doesn't matter even if you try to control it. It's not going to be controllable anyway, so screw that," adding that "ethicists, educators and politicians need to be smart about what kind of common-sense limits we put on that and how to help educate people about the healthy use about technology."

The thing that's more likely to be a problem is the loss of privacy and social communication that comes with new computer technologies. My initial plan was to establish what would be the equivalent of a "Comics Code" for active computer development companies, but those few sentences essentially dismantled it. All we can really do is be cautious, careful, and smart.
For my next Slate post, I'll have to rethink my overall plan and figure out the best way to make people care.