Download and watch the episode here:
This Week in Law 244
Denise Howell: Next up on This Week in Law, Professors James Miller and Stan Liebowitz join Evan Brown and me, and we tell the world that it had better let us buy our 3D-printed drones with bitcoins, or we'll unleash Quentin Tarantino's war bots on them, on This Week in Law.
Netcasts you love from people you trust. This is TWiT! Bandwidth for This Week in Law is provided by CacheFly, at cachefly.com. This is TWiL: This Week in Law, episode 244, recorded January 31, 2014.
Deep Blue vs. the Universe
Denise: Hi folks, I'm Denise Howell and you're joining us for This Week in Law. We have an amazing panel. I hope you're in an economic mood, because we have two economists on the show today to help us understand the latest issues at the intersection of technology, law, and policy. Joining us from Smith College in Massachusetts is James Miller. Hello, James!
James: Hello and thanks for having me on your show.
Denise: Great to have you. James is also an author and has written a book called "Singularity Rising". You've also written on microeconomics and game theory; those are really interesting topics too. I don't think we'll get to all of that, but we'll definitely try to pick your brain on some of the really interesting stuff around the singularity, so thanks so much for joining us. Also joining us from the University of Texas at Dallas is Stan Liebowitz; he is a professor there and also the director of the Center for the Analysis of Property Rights and Innovation. Hello, Stan.
Stan: Hi, nice to be here.
Denise: Great to have you. Also joining us is Evan Brown from the InfoLaw Group in Chicago, Illinois. Hello, Evan.
Evan: Hi Denise, hope your Friday is going well, it's great to be here, I'm looking forward to our conversation.
Denise: Me too. And how great is it, Evan, that we have an actual author on the topic with us today, because I know you are so interested, and we constantly come back to issues around the singularity. So why don't we start there, because I think if we do, it will help frame the discussion for a lot of the other things that are coming up in our rundown as current in the news this week. So James, I haven't read your book yet. It's called "Singularity Rising", and it was authored in fall of 2012, or published then, is that right?
Denise: I have read a current review of it, and I hope you appreciate that I'm part of a book club that meets every month. They're calling for new books to read, so I put this on the list; maybe we'll read it at book club. And if we do, we love to get authors on to help us understand things that may have left questions open, so maybe we can get you in on Skype for that too. It's just a fascinating topic, and for anyone who's not familiar with it, why don't you introduce the concept of the singularity. I know there's just too much to try and lay it all out in a nutshell, but to the extent you can, give us an overview of your book.
James: Sure. Our entire modern civilization, the reason we've done so much more than apes, is because we're a lot smarter than them. The idea of the singularity is that we're currently developing technologies that are going to create things as far above us in intelligence as we are above apes. And I think there are multiple possible ways we can achieve the singularity. One of the most likely is that someone is going to be able to create a computer program that's, say, as smart as a person. Once we've done this, because of Moore's law, in a few years it will be able to think maybe twice as fast, and eventually it will be able to think thousands of times as fast. Then the smarter machine will be able to examine its own hardware and figure out ways of becoming even more intelligent. Another possible path to superhuman intelligence is where we put things in our brain. We might figure out how to make brain implants that increase human intelligence. Or we could genetically engineer super intelligence into the next generation. Right now there are some serious research projects trying to find the genetic basis of intelligence. The Chinese, for example, have looked at the DNA of lots of kids who are really, really smart. Once we've identified the genes that create genius, we could use existing fertility technologies to create kids who are much smarter than today's, and there is a possibility that within 10 years, babies will be born that are much smarter even than Einstein was. I define the singularity as a threshold in time at which vastly increased intelligence radically remakes civilization, and I think we're almost certainly headed for a singularity if we don't destroy ourselves first.
Denise: That's a great big IF, I suppose. We'll get to one of the reasons why that is in just a moment here. People might be wondering, ok, this is sort of a futurist topic and why are we talking about the singularity on a show that deals with current issues at the intersection of technology and law, and the answer to that question is, because I think we've sort of slouched toward the singularity day by day and innovation by innovation, so even though we're not there yet, as you look around yourself today, James, what do you see as the strongest signs that we will be there eventually?
James: One is that Google just bought a company called DeepMind, which was working on artificial intelligence in video games. Google paid, I think, about $400 million for it, and it employed some of the top artificial intelligence researchers. So this is a huge signal, not only that Google is interested in this, but that if you're a really talented computer programmer and you want to make a lot of money, going into the field of artificial intelligence is a great way to do it. Google is probably not planning for the singularity; they're planning for the next 10 years, and they think that AI is going to be spectacularly profitable. But the better AI gets, eventually we will have computers smart enough to give us the singularity.
Denise: Are we even going to know when that happens? It seems like devices become more and more intelligent and helpful, at least they're attempting to be helpful, each and every day. Are we even going to know when something – are we going to be able to identify the singularity when it happens?
James: Probably, because I think we're going to experience something called an intelligence explosion. It's really easy to make changes in software, so once computers are smart enough, they will be able to look at their own software and, I think, very quickly increase their intelligence. So I think we'll go from computers being about as smart as a typical human one day to, maybe literally 24 hours later, super intelligence. I think there could be a really sharp break. Unfortunately there's a chance that these super intelligences will decide just to kill us immediately, so we might just die; a whole civilization might just end. And I think that's likely to happen because we know from physics that there is a limited amount of free energy in the universe, and so a super intelligence that wanted to be really good at chess would say, "Hey, you know the molecules in people? I would rather use them in making chess hardware, so I think I'll kill everyone just to make a bunch of chess computers."
Denise: Well, that's a fun thought, as I send my kid off to chess class every week! I'll just have to bear that in mind, that we could be heading toward the dystopian future where an AI just uses us all to make chess a better game. Evan, this is really your pet topic, even more than mine. Do you have any questions for James before we move away from the broader topic of the singularity and the legal and ethical issues that it raises, and into some related stories?
Evan: Gosh, it's really hard to isolate it into one issue, but a motif that comes up a lot on this show, and I think where we can branch out and fruitfully have a conversation about this, is the way that the advance of technology will affect us at the social layer. And James, that was a very colorful example of the dark side of it: chess, a computer programmed to be an excellent chess player, conceivably destroying the universe. I guess, less a question than a general statement, a general notion, trying to lay a platform we can jump off from and discuss these things in more detail: these advances in technology, whether you want to characterize them as the post-singularity, or transhumanism, or even adopt the 19th-century, early-20th-century concept of eugenics, which seems so politically incorrect, but there are ways of seeing how this could be the modern incarnation of that notion. How is it going to affect us? And I'm not looking for an answer here, just laying a platform, but how is this going to affect us at the social layer, so that we can have a discussion about laws and the way norms should fit into all of this in a bunch of different areas? It just seems inescapable that these advances in technology, if they actually do start affecting us in very intimate ways, machines going into our bodies, becoming part of our bodies, where it becomes indistinguishable what's machine and what's human, or where we are affected externally by all of these things. We just have the precursors of it now, Google Glass being a good example of augmented reality leading into some sort of transhumanism. I just want to have a conversation, break all those topics out: how, in this new status post-singularity, we should make laws and regulate ourselves so we can exist and thrive in that sort of environment.
James: One big issue is going to be our employment and welfare law, as we’re approaching singularity, we’re going to get machines that can do more and more of what people do, and we might have vast segments of our population that just can’t be productively put to work, or people who won’t be able to earn more than $5 an hour in a free market. So we’re going to have to decide what to do with them and I don’t know a good answer to that. I think we might end up with a society where people basically play video games and do art and have fun and are supported by the small number of people who have jobs and by capital holders. It’s going to be a very difficult issue to deal with.
Evan: I’ve seen this idea where that could happen, and then there’s also this amelioration of suffering that could occur, particularly in the third world, and did you describe it as somehow eliminating the – I don’t remember how you articulated it, but eliminating the cognitive… what was that whole point?
James: Yeah, eliminating the cognitive – people in third-world countries really have horrible education systems. Even when you build a school in India, it doesn't mean the teacher will show up, or that she even knows what she's talking about, and I think that cheap computers might hold the potential of radically increasing education opportunities in third-world countries. You could see someone like Bill Gates coming up with something like a $5 educational computer and just dropping them all over Africa. If given to children, those might be able to significantly raise IQs in Africa, which would do a lot for their economic development.
Evan: How do we reconcile – those seem to be two ends of one spectrum, or two sides of a coin, opposite notions of how we could be eliminating or reducing suffering because of this. I'm overgeneralizing perhaps, but there's this whole notion you were talking about, better education, versus the first world, where we would see everybody reduced to being couch potatoes with nothing better to do than watch Jerry Springer all day. Are those two concepts in tension with one another, or can they be reconciled, or are they apples and oranges…
James: Intelligence is a tool, and a tool can be used for positive or negative effects. Even in the first world, you know people who are couch potatoes; we can design it so almost everyone is better off. Most people don't like their jobs, and most people, I think, would be happy if they could have a sort of dignified unemployment, if society didn't really need their efforts. They aren't necessarily in tension: losing your job wouldn't be a bad thing if you could maintain the same standard of living, or get a higher one.
Denise: So what about the backlash? One of the bullet points describing your book on Amazon is "Competition with billions of cheap AIs drives human wages to almost nothing". You mentioned a minute ago "while making investors rich". What about the resentment factor there? It seems like we're talking about making class differences even more pronounced and really getting to less utopia and more dystopia, or no?
James: It depends on how you look at it, I think. We could make everyone richer, but the people who own a lot of capital will gain proportionally more; maybe their income goes up by a factor of 100 while the average American is 3x richer. So it kind of depends: do you think inequality is bad per se, or would you think it a better world if everyone got richer, but the rich got proportionally richer than the poor did? My personal view is that I'm not concerned at all about inequality. I think it's important to care a lot about the people who are the poorest; that's different from worrying about the percent difference in wealth between the richest and the poorest.
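A toy calculation makes James's point concrete. The numbers below are invented purely for illustration (they only echo his "factor of 100" versus "3x richer" figures); the point is that everyone's absolute income can rise while the relative gap widens.

```python
# Hypothetical incomes, before and after an AI-driven boom.
# All figures are made up for illustration, not from the discussion.

average_income = 50_000      # a typical worker, before
capital_income = 1_000_000   # a large capital holder, before

after_average = average_income * 3     # average American ends up 3x richer
after_capital = capital_income * 100   # capital holders gain a factor of 100

gap_before = capital_income / average_income   # 20x
gap_after = after_capital / after_average      # roughly 667x

print(f"Everyone is richer, but the gap grows from {gap_before:.0f}x to {gap_after:.0f}x")
```

Whether that outcome counts as dystopian is exactly the disagreement in the conversation: both measures, absolute gains and relative gaps, come out of the same arithmetic.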
Denise: Stan, do you want to offer any thoughts on this before we move on?
Stan: Yeah, I actually think this is getting into the realm of science fiction, and I actually think science fiction writers have talked about this, even though I can't give you examples at the moment because I haven't looked at that literature for a long time. But it's going to be a process that's going to take a long time. In economics, we have a basic assumption, which is that we have scarcity. And it's scarcity that causes the problems of figuring out what to produce and how to allocate what's produced. That's what's underlying all the assumptions in the discussion I'm hearing: "well, some people are going to have a lot more than others". If there's going to be an end result here, we have these machines that are going to be so smart and so capable, they may be able to produce so much of everything we think we want that scarcity disappears. If scarcity disappears, then everybody has as much of everything as they want, and you don't worry about the 1% versus the 99%, because nobody needs any more material goods than they already have. It's hard to imagine what a world would be like where there was no scarcity. It would probably turn out that there were still things people wanted that were scarce, like power, and that's where the competition between people would take place. But in this world that you're envisioning, if it's machines that become so smart, then there may be no scarcity, and human beings can have everything they want materially, but there's really no point to our existence, except that maybe the machines like having us around the way we like having lots of wildlife around that we don't consume, that doesn't necessarily have any economic purpose anymore.
It's a very interesting issue that one could talk about, because what would be interesting is partly what happens between that end point of having as much of everything as we want and where we are now. There's this question of what happens while you're getting there, as, presumably, more and more people have their jobs replaced by machines. In economics we talk about comparative advantage, where everybody is better at something, relatively, than others, so there's always something for you to do even if you're not particularly good at anything. If we have these machines that are better than us at everything, it's not clear that there would be this comparative advantage for us anymore, and we might run into the situation where the owners of the machines, or the capital being used to make the machines, do get a disproportionately large share of what's produced in the future. We have redistribution in our country and most of the other leading countries; there would be a lot of political tension to keep that redistribution up, maybe increase it, and politics would wind up playing an important role. And of course we could get so concerned about the redistribution that we end up destroying everything. The Luddites, if I remember correctly, wanted to destroy the machines that were being used to help advance the well-being of people, and you could have Luddites who decide we have to destroy all the machines. That could all happen in one branch of an imagined world here. So there are a lot of interesting topics, but I don't think any of us are going to see what you're calling the singularity. You're using the term the way I see fundamentalists using the term "rapture", as if there's a moment where something is going to be different. It's going to be a long process; it's not going to be black and white, where one day you're not there and the next day you are.
There are a lot of interesting topics, but none of us are going to see what you're calling the singularity in our lifetime; I'm pretty sure about that.
James: I don't agree. Why do you think artificial intelligence is so difficult? Would you agree we could do it if we could put a human brain on a computer chip? Suppose today someone at Google said, "we have a computer that's just as smart as a person". Because of Moore's Law…
Stan: We don’t have a computer that’s just as smart as a person. We’re not going to have that in our lifetime.
Evan: Can we stop and put some definitions around this thing? I'm no neuroscientist, I'm no computer scientist, so I'm probably the least qualified person in the western hemisphere to talk about this, but from reading Kurzweil and others, it seems to me that the way we quantify human intelligence is different in kind from the way we can quantify machine intelligence, based on current machine and computer science and processing power: parallel processing versus serial processing, and a bunch of different factors in there. So it seems like we shouldn't gloss over what it means to quantify intelligence so that we can say a machine is as smart as a human, or vice versa. Is there anything to be said definitionally, before we start arguing and debating that point?
James: Sure. If a computer can do anything that a person can do, there’s a computer right now that can play chess better than we can, they can do statistical studies better than people…
Evan: What about enjoying a sunset?
James: You can't define that, so that's one of those things I don't think has any meaning. How can you falsify it? If I were to say "computers can enjoy a sunset more than a person", and you were to say "no", what observations could we make that would determine who was right?
Evan: I agree, it's a phenomenological problem, but I still know that I enjoy sunsets. Anyway, let's go on.
Denise: The point that that raises then, is it appropriate to put no meaning on the enjoyment of sunsets? Is there a way to think about this where you have to factor in that there are things that are not quantifiable?
James: I don’t think you need to. It might just come down to military ability. If computers can create weapons that can defeat any weapons that humans can create, they will be able to take over the planet if they want.
Evan: I didn't want to disrupt the conversation the two of you were having when I interjected that we should define some terms here. So if there's any way we could just go back to that point, because that was really good, and I should have just popped some popcorn and sat back and listened.
Stan: Computers clearly have gotten much better at doing specific tasks. But I don't know any computers that can do statistical analysis. They can run regressions that we tell them to run, but they don't know what regressions they need to run. So I don't see anything remotely close yet, and I don't think there's been a big push to produce a computer that's capable of performing an analysis on any subject, anything more of a scientific subject. When I start seeing them capable of doing that, then maybe we're getting closer to having the science fiction versions of machines that can really think. What we have now are computers that can do tasks extremely well.
James: Let me give you an example. Let's say that Amazon writes a computer program, and this computer program does a statistical analysis to determine the optimal price, and the optimal type of advertising to send to a customer. This program runs without any human input after it was written, and it does a better job of figuring out what customers want than a person would. Would you say then that we've taken a big step toward the singularity if Amazon creates that?
James: Are you saying that Amazon would not be able to create that in your lifetime?
Stan: Not until the computer can write the program. A person is writing the program; the computer is looking at the data and deciding what the pricing should be, but it's the person deciding on the algorithms that are being put into the program. When you show me computers that are writing the algorithms, then that's a different story.
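The kind of self-running pricing program James describes can be sketched in a few lines. Everything here is invented for illustration: a real system would estimate demand from sales data, whereas this one hard-codes a pretend "learned" demand curve, but once written, it picks a price with no further human input, which is exactly the distinction Stan is pressing on.

```python
# Sketch: a program that, once written, picks a revenue-maximizing price
# from a demand model, with no per-run human input.
# The linear demand curve is a made-up stand-in for learned behavior.

def estimate_demand(price: float) -> float:
    """Pretend 'learned' demand: units sold falls as price rises."""
    return max(0.0, 100.0 - 2.0 * price)

def optimal_price(candidates):
    """Grid-search the candidate price that maximizes expected revenue."""
    return max(candidates, key=lambda p: p * estimate_demand(p))

prices = [p / 2 for p in range(1, 100)]  # candidate prices 0.50 .. 49.50
best = optimal_price(prices)
print(f"Chosen price: {best:.2f}")  # for this demand curve, revenue peaks at 25.00
```

On Stan's view, the human contribution is still decisive here: a person chose the demand model and the grid search; the program only executes them.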
James: I doubt there's any economist who could write a computer program to do statistical analysis if you start with machine language.
Stan: Well no one can write – people are writing programs to do stuff like that, but they’re doing
James: They're using computers to write the programs. They're using the compiler and the computer software; they're using programs that people worked on a long, long time ago.
Stan: They’re not writing programs.
James: But people don’t do that either. There’s no human being that could do that…
Stan: But there are things that computers can do that people can’t, but the things that people can do that computers can’t are what we’re talking about.
James: Can you give me an example? What’s something that you could say that would be an observation that would determine that you are possibly wrong?
Stan: I say to the computer, "go and perform a statistical analysis for me; you figure out what techniques to use, and tell me something, here's something that I've worked on, whether or not file sharing decreases record sales."
James: Ok, so if there’s a way I could speak into Google and say “Does file sharing decrease record sales” and it was able to come up with an answer, you don’t think that could happen within our lifetime?
Stan: Well it might happen in our lifetime, but that’s still not quite what you’re talking about, what we’re talking about, where they’re better than we are. When I see that, I’ll believe we’re getting a lot closer. That doesn’t exist yet.
James: Um, we are pretty close. I think if a computer understands natural language, it could do a Google search and try to find papers. It wouldn't do the original research, but…
Stan: It wouldn't do what the people have done. It wouldn’t be creating anything.
James: It would be solving it in the most effective way.
Stan: If no one had done it before, it's not clear that it would know how to do it. Yet people figured out how to do it without having to go and see whether somebody before had done the same thing.
James: But most people who do that, they use programs that other people have written. Most people don’t start from scratch.
Stan: The statistical package they use has been written by someone else, that's right. Most economists use Stata; I know the people who created it because they were at UCLA when I was a graduate student there. They spend a lot of time working on new components of Stata that allow people to use new statistical techniques that weren't there before. But that's the package people use. By itself, Stata can't do anything. It needs someone to tell it what to do, and then it's just a tool: it runs numbers and pops an answer out. It's just running a statistical algorithm.
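The division of labor Stan describes, a human choosing the question and the technique while the package just runs the arithmetic, looks like this in miniature (the data below is synthetic, invented purely for illustration; a real study would use actual sales figures and far more careful methods):

```python
# The human decides the question ("does file sharing reduce record sales?")
# and the method (a simple least-squares regression); the computer only
# runs the arithmetic. Data is synthetic, for illustration only.

file_sharing = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. downloads per capita
record_sales = [9.1, 8.0, 7.2, 5.9, 5.1]   # e.g. albums sold per capita

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

slope = ols_slope(file_sharing, record_sales)
print(f"Estimated effect: {slope:.2f} sales per unit of file sharing")
```

Stan's challenge is everything outside the fence: deciding that a regression is the right tool, what the variables should be, and what the resulting number means.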
James: I mean, intelligence is an optimization process, for sure. You're trying to figure out how to solve a problem, and an intelligent agent would use anything available; if there were already statistical packages out there, it would use them.
Denise: As I listen to this, I think the way humans create is not that different from the way an artificial intelligence would create: you go out and find whatever information and resources you can, and you attempt to draw conclusions and to find some solutions that haven't been tried before. And I don't see why, I guess I come down on James' side on this, why an artificial intelligence couldn't conceivably do that.
Stan: I don't think it can't – I think it can do it, I just don't think it's going to do it any time soon! By soon I mean within our lifetime.
Evan: Even taking into account the exponential rate at which technologies advance – I know I fall into that trap all the time, thinking we'll advance in the next 10 years at the same rate we've advanced in the last 10 years. There's that old story – what is it – it goes all the way back to One Thousand and One Arabian Nights, with the idea of one rice grain doubled every – there are lots of vivid illustrations of exponential growth, so I'm just throwing that out there.
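The rice-grain story Evan is reaching for is easy to make concrete. In the classic version (usually told with a chessboard, which is the assumption below), the grains double on each of 64 squares:

```python
# Doubling one grain of rice per square: the classic illustration of how
# quickly exponential growth outruns intuition.
grains = 1
for square in range(2, 65):   # squares 2 through 64; square 1 holds 1 grain
    grains *= 2
print(f"The 64th square alone holds {grains:,} grains")  # 2**63 grains
```

That final square holds over nine quintillion grains, which is the shape of the intuition trap Evan names: growth that looks tame early on becomes absurd late in the sequence.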
Stan: Whether it will remain exponential – there have been questions about whether Moore's law is going to continue. They're having problems, which I presume they will overcome, in continuing to get to smaller and smaller densities of transistors on chips, but they have been running into some trouble and have gone to 3D to try to get around that. I presume they will, and Intel seems to have had some success, but whether it will continue to grow exponentially or not is not something we know with certainty. There has been, for a long time, something called diminishing returns, and if diminishing returns were to set in here, then it will take a lot longer to get where you're going than you're currently thinking. You're assuming that you're going to continue to have increasing returns, to have exponential growth – maybe, maybe not.
Evan: What would be some examples of those diminishing returns? Are you talking from a technical standpoint, or could we be talking about something external: us realizing, at a cultural, societal kind of level, that this advancement is not what we want because of the negative side effects, the dark underbelly of the uses of these kinds of technologies, going back to the chess computers destroying the planet?
Stan: Generally, diminishing returns are not thought to be something brought about by society, even though I guess one could construct a theory in which that plays a role. It's just that as you apply more and more of some resource to production, that resource winds up having a less and less powerful positive effect. So let's say it starts becoming increasingly difficult to keep putting transistors on chips with greater density. Hypothetically, if that were to happen, you would start running into this problem where you're putting more and more effort into creating the chips and getting less and less out of each new attempt to increase the productivity of the chip. If that were to take place, then that's going to slow everything down in your scenario, and the use of resources to try to make the chips denser would be where the diminishing returns come in: whatever resources you are using to increase density have a diminishing impact on actually increasing the density. That would be an example of how diminishing returns work. We call it the law of diminishing returns because of what we see in the typical production of products. Let me give you a very simple example we use in the classroom. Farmers want to grow a crop on the land. Think about increasing one input at a time: you have a farm of a certain size, say one acre. You start increasing the amount of fertilizer and you'll get more crops, holding the watering and the size of the plot constant. But as you put more and more fertilizer on, eventually you stop getting increased plants, and eventually, if you put enough fertilizer on, you will kill everything, and then you won't get any plants. That's how, for a particular input, you eventually must reach the point of diminishing marginal productivity.
That's why it's called a law: we tend to think it occurs everywhere. Admittedly, that is for the physical production of most products. We also assume you can't have increasing returns forever; whether you can find production of something where increasing returns go on forever is questionable. I don't know that it's something we can point to in the past, and even if you find some process where that's always occurred for the last 100 years since the process has been around, that doesn't mean it will necessarily continue. So we've had a pretty good run with computer chips, and that run might continue, and it might continue to grow exponentially, but that would make it quite different from most other processes of things that have been built in the past.
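Stan's fertilizer example can be sketched numerically. The quadratic yield curve below is an arbitrary illustrative choice, not farm data; it's picked only because it shows both stages he describes, shrinking marginal gains and then an outright decline once there's too much fertilizer:

```python
# Toy yield curve: output rises with fertilizer, each extra bag adds less
# than the last, and past some point more fertilizer reduces the crop.
# The quadratic form is a made-up illustration, not agronomy.

def crop_yield(bags: int) -> int:
    return 10 * bags - bags ** 2   # peaks at 5 bags, declines afterwards

# Marginal product: the extra yield from each additional bag.
marginal = [crop_yield(n + 1) - crop_yield(n) for n in range(7)]
print(marginal)  # [9, 7, 5, 3, 1, -1, -3]: gains shrink, then turn negative
```

Reading the list left to right is the law of diminishing returns in miniature: each bag of fertilizer adds less than the one before, and the sixth bag onward actually hurts the crop.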
Denise: Alright! Fascinating context for our discussion of the singularity. I think we will make "singularity" our first MCLE passphrase for this episode of This Week in Law. If you are a lawyer and you would like to apply for continuing legal education credit in your jurisdiction, we have some information about that over at the TWiT wiki, at wiki.twit.tv, and it may be that you're in another profession this applies to; we put these phrases in the show in case you have to verify for some oversight body that you actually watched or listened. So that's our little present to you. Let's move the discussion to…
Stan: Can I say just one little thing?
Denise: Go ahead.
Stan: Even if you have decreasing returns, it doesn’t mean you won’t get to the point that the machines are so smart that they can do this. It merely affects the speed with which we’ll get there.
Denise: Right. And that seems like the only thing on which maybe you two are not precisely on the same page with respect to the singularity; it's just a question of when, is that right?
Denise: So let's look at something going on present-day that is sort of a real-world experiment in how artificial intelligence might play into economies, now and going forward, and that is Bitcoin. We keep coming back to bitcoin on the show. I appreciate everyone who got in touch with me after last week's show, when I posed a question about bitcoin mining and exactly how that works, and between shows, having chatted with people and read some things, I've come to the conclusion that there is less pyramid scheme, although there is that component, and more SETI@home, as far as the mining process goes. That's sort of beside the point I want to get to, which is this whole notion of software and algorithms and the economy and currency. Obviously bitcoin has taken off and we're not sure what's going to happen with it, but with its current valuation it's certainly made people stand up and take notice, and it's certainly having an economic impact. So against the backdrop of our singularity discussion, I wanted to pick both our guests' brains on what they think cryptocurrencies mean and what is likely to happen in the future. James, we'll start with you.
James: I think the people who get the most benefit out of bitcoin are criminals and crooked government officials; they're the ones who really need a way of transferring cash without having the cash linked back to them. So I think it's very likely that bitcoin either won't become popular or will be banned by our government, and one thing our government is very good at doing is cracking down on banking systems it doesn't like. So I don't really see a very bright future for bitcoin, and I don't think it's something most individuals should be investing in either. The technology behind it is wonderful, though.
Stan: This is actually fairly interesting. I don't know if you're familiar with the game "Hot Potato" or not, but that's where kids throw something you can think of as a hot potato, like a ball, back and forth until a bell rings, and then whoever is left holding it loses. That's what you are going to see, I think, with bitcoin. There is a history of private currencies; it's not as if we've always had a world of only government currencies. In the 1800s, the US had private currencies. But those private currencies were created by institutions like big banks, and the government did basically bring all of that to an end, and James is right, I think the federal government will do it again. The difference between bitcoin and the private currencies of the 1800s is that back then a big bank was saying, "We have our own pieces of paper here, which we promise to convert into either US dollars or gold at some exchange rate; why should you take our paper, which cost us virtually nothing to print, and give us goods for it?" What the person creating the currency gets is called "seigniorage," from Mr. Senior a long time ago. It's great to be able to print the money, because you get goods for it; you buy something, and that's how the money gets into people's hands, generally. People were willing to take that paper because the big banks were big institutions with a lot of money. Not as much money as the US government, and not as powerful as the US government, but they were big, powerful institutions that people felt would not cheat them, take their money, and disappear overnight. Bitcoin doesn't have that. There's no reason whatsoever to think it won't be gone tomorrow, and anyone who has taken good money and bought bitcoin with it will find themselves holding the hot potato, where they lose and wind up with nothing. What's going on now, as far as I can see, is almost a little bit of a bubble.
Everyone who is familiar with bubbles knows prices can't go up forever, and that they will return to their "real value" eventually, and the real value of a bitcoin might be zero, because there's nobody with any real assets to make good on the claims you have with the bitcoin currency. So what you wind up with is people seeing that it's going up in value and thinking that if they get in now, they will find an even bigger fool who will pay even more than they paid. That's how the housing bubble worked, a similar bubble: everyone was buying and selling houses, and it's not as if they rationally thought the price of houses was going to keep going up forever at the rate it was going, but as long as you can get out without getting stuck, it's a good deal. Everyone thinks they can get out in time, and they don't realize they're going to be the ones left with the hot potato. The foreclosed houses in the housing bubble were the hot potato to some extent, and a lot of them were being sold by speculators; a lot of those speculators got burned, though they had made tons of money before that. I think that's what's going on with bitcoin. It's very trendy; people like to get involved and get their name out there. It gives a seller of items a little bit of publicity, department stores saying "we're now taking bitcoin," but when enough people do it, you don't get any publicity for it anymore, and I just don't think there's anything inherent in the bitcoin phenomenon that says there should be value. Now, if a big company wanted to say "we have a currency," if Google did, or Microsoft, or Goldman Sachs, or somebody who you think has the financial wherewithal to stand behind it, and I'm not sure Goldman Sachs is the best choice after what happened in 2008, but big companies might do it and say "we're putting our resources behind it."
The government would still stop it, but until it was stopped, at least it wouldn't be completely artificial. Bitcoin, I think, is just completely artificial.
James: I really do think bitcoin has real value for criminal enterprises, like the Silk Road, where you want to be able to pay for something anonymously. There could be an equilibrium where people would agree to use bitcoins, and so I think bitcoin could survive if the governments don't crack down on it.
Denise: We have two economists here who can help me understand this better, so I'm going to take advantage of that. Is that simply, James, because we're not dealing with pieces of paper with numbers on them that have been issued by a government? It seems to me that cash can be pretty anonymous too, and I don't understand why bitcoin is necessarily the "currency of choice" if you're engaged in criminal activity.
James: You could be an FBI agent, for example. Let's imagine that I say I'll mail you heroin in return for money. You could be an FBI agent, or you could turn informer. I don't want to meet with you, because I could get arrested, and I don't want you to mail me something, because then you'd have to know my address. But with bitcoin, you could transfer your bitcoins to me in a way where, if the NSA hasn't secretly beaten bitcoin's encryption, no one will ever find out. You can't do that with cash, and you certainly can't do that with checks or credit cards. So it fulfills a function for criminals, or for people trying to evade government, that has no parallel in any other type of currency.
Denise: Do you second that Stan?
Stan: The one problem there is that it has to be convertible into real cash. You can't just have criminals use it, because if the criminals have their own currency but can't use it to buy anything in the real world, it won't be worthwhile for them. They have to be able to convert their bitcoins into actual cash that they can use elsewhere in the real economy. If it fails in the real economy, it's going to fail for the criminals as well.
Evan: You've just got to make sure none of your DNA gets on that package of heroin, that's all. The other thing is, is bitcoin kind of like the "tulip mania" the Dutch had back in the 1600s, where these transactions happened even though the asset really had no intrinsic value? I mean, come on, you could crush a tulip bulb with a hammer, but people were buying and selling land based on this thing. Is there anything we've learned from that episode?
James: I thought it was, until I read a comment on a blog, I think it was Marginal Revolution, about how bitcoins are really useful for corrupt Chinese officials. They really want to take currency out of China, but it's difficult to do, and bitcoin is allowing them to do this. And there is so much corrupt money in China that if bitcoin really is performing that function, it has significant value to people.
Evan: I read a blog post a couple of months ago about an Australian company, Tomcar, that is accepting bitcoin, and it kind of dawned on me that there's a use for this outside of the Silk Road. One of the reasons they talked about was that it avoids a lot of the transaction costs, exchange rates, and processing fees. Is that something that could be a permanent benefit, or would that eventually somehow be done away with?
James: I don't think that's worth it, because bitcoin's value fluctuates enough against the dollar that if you're currently dealing in dollars, there's a currency risk you have to take on, and I have a feeling that for almost everyone, that's going to swamp the transaction fees they'd pay dealing with credit cards. So I don't think that would work.
Evan: Makes sense.
Stan: Everybody can use the dollar right now, and there are other countries that use the dollar as their currency, but most countries don't want to use a second country's currency, and that's because there's real value in being able to create your own currency. So governments are going to have serious motivation to keep some other currency from dominating. They would try to stop bitcoin, but my guess is they won't really have to; it will die on its own. It's just a little fad that we have.
Denise: What about the notion, and we have a couple of stories in the rundown about people attempting to build legitimate businesses in the bitcoin economy, that the state of New York is apparently considering a "BitLicense" for businesses that operate primarily in bitcoin? James, would that solve the anonymity problem, if you had to be licensed and had a degree of oversight from the state that went along with that?
James: I don't think that would help. There would be people who were legitimate businessmen, and they would probably end up taking money from criminals in the form of bitcoins, but everything the businessman did would be above board, so in a sense I think that would just be a way of laundering money from criminals or from crooked government officials in other countries. If I'm licensed, there's no way for me to know that anybody who transfers bitcoin to me isn't a criminal.
Denise: What do you guys think about Charlie Shrem being arrested? He's the CEO of the bitcoin exchange BitInstant, and he's being charged with money laundering, as you say, James. I haven't gotten too deep into the allegations and the indictment against him, but it certainly seems like the folks who set up BitInstant were attempting to have a legitimate business here.
James: It's hard, because if you have a true, legitimate business, there's really no need to use bitcoins unless you're sort of doing it for the novelty or advertising value.
Denise: Evan, what do you think of this guy?
Evan: I feel sorry for the guy. He's 24, and he's being arrested and indicted for stuff that he did when he was 22, so to me there's more of a human element to the story, the human foible. Despite trying to be above board and run a legitimate enterprise here, there was just this irresistible temptation, if the allegations turn out to be true; this is America, innocent until proven guilty, and all that. There was this anonymity, this perceived safe way of doing something that could bring in a lot of income really quickly. So the thing that impresses me the most is the human aspect of it: how, despite one's intentions or outward statements of trying to do it right, there's this ability of bitcoin to facilitate transactions like this that are unlawful.
Denise: Is there a copyright law parallel here? Would there be sort of a Grokster-like liability imposed where, even if an exchange is not itself engaged in illegal acts, it knows that its users are, or is willfully blind to the fact that its users are? And now I'm munging up Grokster with the DMCA and its case law, but you know where I'm going with this. What do you think?
Evan: Isn't bitcoin much more decentralized than what file-sharing software was, or is? Bitcoin is much more like BitTorrent than Grokster, and certainly more than Napster; Napster was very centralized. With Grokster, liability for file-sharing infringements turned on the object of the purveyor of the device, but here, with bitcoin, the idea of who the purveyor is, is vapor. Who is it that you would try to pin liability on? And then the bitcoin protocol is different from bitcoin as a currency. So I don't know who you would try to pin it on. I would invite anybody to address that.
James: I think if you're running a silk road kind of business, and it's pretty clear people are trading drugs using your website, you probably have liability for that.
Denise: I thought the bitcoin topic was interesting against the backdrop of our singularity discussion, just in the context of economic systems that sort of run via software and algorithm, and whether either of you thinks that's something we're going to see more of. Stan?
Stan: What's the "that" in that sentence? What are we going to see more of?
Denise: Economic systems that are software driven, and money that comes into existence through computers working with one another to break codes and put more money into the system, as bitcoin does.
Stan: The only way I see this idea of computer mining being used would be if a government wanted to be able to prove that it wasn't going to inflate its currency. That's not an issue right now; by most measures we don't have currency inflation problems. But if there were inflation problems, I could see a government taking its currency and, instead of having the Fed promise to limit growth, saying "we're going to use bitcoin-type technology to limit how many new dollars we can print." That would be a way of trying to prove that they're not going to produce too many dollars. But that seems pretty far-fetched to me as something that is going to happen, because the government can always break its promise. We're going to see funds being transferred entirely electronically; that I believe is certainly going to happen. It will all be digital, it will all be over the internet, but the currencies aren't going to be created through bitcoin-type mechanisms. Countries don't want that. When they have their own currency, they want to have control over it, which is why they got rid of all the private currencies that existed in the 1800s. There's value in being able to create your own currency, and countries want to have their own separate currencies, so Europe is not about to take the dollar, and the US is not about to take the euro. They could, and it would make transactions easier because everyone would be on the same currency, but countries aren't willing to give up the value of having their own currency to do that. I don't see the politics of that changing much. Transactions are going to be over the internet, they're going to be digital, but it's not going to be bitcoin-type stuff.
James: I think there is a way it could come about, through hedge funds. One way we might get a singularity is that hedge funds have an enormous amount of computing power, and they keep creating computer programs that do faster and more efficient trades. It might be that hedge funds have lots of different computers making trades with each other, and the computers come up with their own kinds of currency. It might be the kind of thing most of us wouldn't even hear about, but through the shadow banking system, these currencies might emerge.
Denise: OK, let's wrap up the bitcoin discussion with this story about the bitcoin ATM that was installed, was this in Canada, Evan? I don't know why I thought that's where this took place. In Vancouver, yes. At a coffee shop in Vancouver, the world's first bitcoin ATM was put into operation. The way this thing works, it's made by a company called Robocoin, and it doesn't take debit or credit cards; it's cash only. So if you have cash and you want bitcoins, and you have to put in a lot of cash these days to get even one bitcoin out, you get numbers that are addresses linked to bitcoins, so you can buy them that way. Or if you already have some, the machine can scan a QR code on your mobile phone, and the machine will dispense cash. So it does an exchange. The operators of this ATM put it in, and they noticed that there was a bunch of transactions all in a row that were voided, so they went in to see what was going on, and it turns out that some enterprising person had set himself up with a sign next to the ATM saying "I'm going to give you a better exchange rate, so don't use this ATM; I'll do it for you." So he undermined the whole purpose of the ATM. Now they've hired someone named Cameron Grey, who is earning $12 an hour to sit next to the ATM and make sure that nobody comes in and tries to offer a competitive exchange rate in close proximity to it, and also to encourage folks to get involved in other bitcoin-related activities that the ATM owners are doing, something called Coin Trader. What do you think about this, Evan: first the guy with the sign, and then the legal ramifications of their reaction, of saying "hey, you can't undercut us competitively"?
Evan: The first conclusion I had about it is that the ATM sort of undermines the purpose of it being a bitcoin transaction to begin with, so it's inherently self-contradictory. Does this ATM have a little video camera on it so they can tie your identity to your bitcoin transaction, and then follow it through the chain and what have you? But as far as this other part of it, Cameron Grey sitting next to it and offering other services, it reminded me a little bit of the law of trademarks and the law that developed around competitive advertising. Is it all right, for example, for Walgreens to put the Walgreens-brand painkiller, "Wal-Profen" or whatever it's called, next to the Ibuprofen with the same kind of coloring on the shelves? We see that in online advertising, especially with sponsored ads: let's say you type in "Ford" and it shows you an ad for Chevy, things like that. Looking at it from a branding and marketing perspective, I don't think it's all that clear-cut that whoever it was holding up the sign saying "I'll give you a better exchange rate" was doing anything wrong, at least under US law. But then of course there are all these other things that come in, like trespass, where you can be on the property, and all of that. It's just one of those interesting little scenarios that gives us a few things to go out and talk about, and several different directions to take in identifying issues.
Denise: James, Evan hit on something there that I think is kind of interesting, relating to our discussion a few minutes ago. If we wind up with intermediaries like ATMs that insert themselves into the bitcoin ecosystem, does that destroy the anonymity that encourages crime?
James: It actually encourages crime more, because then criminals could use other agents. If I have a lot of bitcoins I got through selling heroin, I would transfer the bitcoins to someone else, who would then get cash for them and get the cash to me. So it would be a way of allowing criminals to launder their money and get dollars for it.
Denise: Stan, any thoughts on the ATM?
Stan: No, I find the whole ATM thing slightly strange. I presume it's largely a publicity stunt, and maybe they are charging what amount to very high exchange rates. We don't normally talk about exchange rates when we're talking about ATMs, because the currency is all the same; there's just the fee you have to pay to use the ATM. If you charge an exorbitantly high fee, it comes out, in this case, as a bad exchange rate, and this guy was willing to sit next to the ATM and basically say "I'm giving you a lower transaction fee to do your exchange." The whole thing with currencies is that people have to trust them. If people really do end up trusting bitcoins, and I don't think they will, but if they do, then they will exchange them, and there will be ATMs out there, and they will compete with each other just as they do now. I don't think bitcoins will in fact be treated that way, but if they were, then the interaction between them and people would be just like the interaction between people and our currency now, and it would be regulated. I suspect it takes a license in most places to be able to translate one currency into another; I'm not sure, but I suspect you couldn't do it without a license, if only because it would be so easy to launder money that way, and you have to get a license for almost everything these days, so I'm sure there must be one involved with that too.
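(Editor's aside: Stan's point that a flat or percentage fee "comes out as a bad exchange rate" is just arithmetic. A sketch, with entirely invented numbers, since the ATM's actual rates and fees aren't given here:)

```python
def effective_rate(cash_in: float, nominal_rate: float, fee_pct: float) -> float:
    """BTC received per dollar once the machine skims its percentage fee.
    nominal_rate is BTC per dollar; fee_pct is the ATM's cut (0-1)."""
    btc_out = cash_in * (1 - fee_pct) * nominal_rate
    return btc_out / cash_in

# Hypothetical: $100 in, 0.001 BTC per dollar nominal, 5% ATM fee.
rate = effective_rate(100.0, 0.001, 0.05)
print(rate)
```

With these made-up figures the customer effectively gets 0.00095 BTC per dollar instead of 0.001, which is exactly why a person standing next to the machine can "offer a better exchange rate" simply by taking a smaller cut.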
James: I just want to offer one sort of conspiracy theory. We of course don't know the real identity of the guy who created bitcoin, and I think there's a chance it was created by the US government. It's a scam: they're trying to get a bunch of criminals and terrorists to use it, and at some point they're going to say, "Hey, guess what, we know everything that was done with bitcoins, and we're going to arrest a whole bunch of people." Probably not true, but there's about a 10% chance, I would estimate.
Denise: That's certainly possible, but it's inspired a whole bunch of copycat currencies, and that leads me to my last question about crypto-currencies before we move on to war machines and other dystopian topics. Building on what you just said, Stan, that people have to trust currencies: pushing back on that a little, and wondering if they really do, do you think that people are investing in, or buying, or trading things with Coinye West coins because they trust them, or because for some reason or another they're appealing, they're cool to them? Fred Wilson made a point at a forum about bitcoin that if JPMorgan Chase came up with a bitcoin, no one would build on top of it, because it wouldn't have the sort of open, distributed, verifiable, we're-all-in-it-together open-source aspect that bitcoin has. So there are different things about these currencies that appeal to different people, which may have something to do with trust and may not. What do you think about that, Stan?
Stan: For a currency to be in existence for a long time and be taken seriously by a lot of people, they do have to trust it. And I would give a big bank a much greater chance of being successful with a currency than an anonymous person, we don't know who, creating a currency without anything to back it up. So I think that's exactly what's wrong: there are people who like it, they like the way it is being created, it appeals to them, and they'll buy it just for the fun of it. But that's not what people use currency for. Eventually, if it's really going to last as a currency, it has to be trusted. The fact that it's going up and down in price all the time makes it very poor as a currency; it would have to really smooth out before you could seriously use it, because people want stability. You'd have to have futures markets, as we do now; we have futures markets in all these currencies, for people who are using currencies to exchange with one another in different countries. You have to know how many dollars you'll have to give up in a year to buy something priced in euros after it is delivered, and you can go to the futures markets and guarantee what the exchange rate between dollars and euros will be in a year. You'd have to have markets like that grow and develop around bitcoin before it became reasonably possible to start using it as a currency for transactions. Right now it appeals to speculative purposes, because it's going up and down and people hope it's going up, and it appeals to other people who think it's cool. But the people who think it's cool are very likely to lose whatever investments they are making in it, because they're doing it because it's cool. That's not a good investment strategy, generally.
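(Editor's aside: Stan's futures-market example can be made concrete with invented numbers. Locking a forward rate today fixes the dollar cost of a euro payment due in a year, whatever the spot rate turns out to be; without the hedge, the cost is unknowable until the payment date. All figures below are hypothetical:)

```python
def hedged_cost(eur_payable: float, forward_rate: float) -> float:
    """Dollars needed in a year if we lock today's forward USD-per-EUR rate."""
    return eur_payable * forward_rate

def unhedged_cost(eur_payable: float, future_spot: float) -> float:
    """Dollars needed if we wait and pay at whatever the spot rate ends up."""
    return eur_payable * future_spot

locked = hedged_cost(1_000_000, 1.35)     # known with certainty today
exposed = unhedged_cost(1_000_000, 1.50)  # only knowable after the fact
print(locked, exposed)
```

The point is that the hedged figure is contractually certain at the time of the deal, while the unhedged one depends on where the spot rate lands; that certainty is what bitcoin would need a futures market to provide before businesses could price contracts in it.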
Denise: Well, that's good advice right there. James, any thoughts? I'm sort of thinking people might equate these crypto-currencies with the "Hello Kitty" credit cards out there; everyone wants their own personal stamp on something. And I think Stan has answered me that I'm comparing apples and oranges, a currency versus a way of using an existing currency, but still, it seems to me that at least some of the appeal of these things is whatever floats the particular user's boat, if they are not in it strictly to engage in criminal activities.
James: Yeah, I agree, advertising works; bitcoin has gotten a vast amount of free publicity. But I also think we have to give credit. I'm not a computer scientist, but bitcoin is genuinely clever. The programming and the algorithms behind it really are quite sophisticated, and it made a serious contribution to what we understand about algorithms. So that is part of the appeal: it does represent a sort of technological advance for man, even if it doesn't catch on.
Denise: All right, well, keeping on with our theme of technological advances, and the good, the bad, and the ugly of them, let's talk about things that are becoming more intelligent: machines used in warfare. There's a good article, I think it's in The Verge, yes, by, I'll tell you in a second, it's loading, by Adrian Jefferies, about a DARPA-sponsored competition that happened in December in Miami involving intelligent robots. It had teams of engineers from MIT, Google, Lockheed Martin, all kinds of people there, showing off what their bots could do. The article is about a sort of conscientious-objector figure who was there reminding us all what we don't want to have happen: weapons that can decide on their own when to push the kill button. Going through the article, it suggests those sorts of things already exist. Quoting from it: "the recently tested X-47B is one of the most advanced unmanned drones in the US military; it takes off, flies, and lands on a carrier with minimal input from its remote pilot." Taking it into scarier territory, the Harpy drone, built by Israel and sold to other nations, autonomously flies to a patrol area, circles until it detects an enemy radar signal, and then fires at the source. Meanwhile, defense systems like the US Phalanx and the Israeli Iron Dome automatically shoot down incoming missiles, which leaves no time for human intervention; one of the people interviewed for the article had this great quote, that "you don't even have time to curse" before you'd need to deactivate something like this. What do we think about these war machines, James? Certainly you were alluding to them in your discussion of the singularity and how we are going to manage not to destroy ourselves.
James: Well, I think the war machines are probably good. I would rather have computers make decisions on who to kill if computers are going to make more accurate decisions. I can see drones flying over areas where there are terrorists, and the drones have facial recognition software, and they are deciding who to kill or not, and they are more likely to kill the terrorist and less likely to accidentally kill a 10-year-old or someone who just looks like the terrorist. So you can argue whether the US should be in the business of killing our enemies, but if we are going to be trying to kill our enemies, then let's do it as efficiently as possible. I think machines offer a path for that, and they're only going to get better at it.
Stan: Well, I'm sort of in agreement here. I presume that the IBM Watson machine could be programmed for this. If it can beat people at Jeopardy, it can probably beat those guys at their control boards who are controlling the drones that launch missiles, which is done from a different facility; my understanding is that's how most of the drones are run, the ones that are used to seriously kill people. I don't doubt that they could have a program that takes in the information coming from the camera and more rapidly makes a decision, because the decision to get someone, the final decision, has already been made; it's merely a matter of deciding when is the proper time and how to aim the missile. And I don't doubt that a program could be written to do that better than people. If it works better, and someone has made the decision that we want to take out this one terrorist, then it's really just an intermediary, a more efficient way of doing that. That's a little different from saying we're going to have the computer decide which terrorist to kill. That, I'm not sure a program could be written to do right now.
James: I strongly disagree with that. I think you probably already do have computers that run Bayesian analyses that say, OK, this is what we know about this person; what are the odds that this person will try to kill an American?
Stan: I don't doubt that we have programs doing that, but I don't think they are entrusted with making the final decision. And I'm not sure we want to do that at this stage, because even if you run a Bayesian analysis, which may be very helpful, it still doesn't give you the full cost-benefit analysis you wanted to do: what the blowback might be from destroying this person, and so on, which is something the computer is not going to take into account, because they are not broad enough yet.
James: Certainly right now a computer could not make the overall decision. But you could say to the computer: come up with the 100 people most likely to harm an American and we will kill them, or we will kill the people where the odds that they will end up being a terrorist are above 80%, so long as that number doesn't exceed 1,000 or something. So you put in those parameters, then.
Stan: We are not in disagreement here. You are admitting that we are not ready to let the machines make the final decision there.
James: Not yet.
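(Editor's aside: the selection rule James describes, take everyone whose estimated probability exceeds a threshold, capped at a maximum list size, is easy to state precisely. A minimal sketch with entirely invented scores; the function name and data are illustrative only:)

```python
def select_targets(scores: dict[str, float], threshold: float = 0.8,
                   cap: int = 1000) -> list[str]:
    """Return the ids whose estimated probability exceeds the threshold,
    highest probability first, truncated at the cap James mentions."""
    above = [(p, name) for name, p in scores.items() if p > threshold]
    above.sort(reverse=True)  # highest-probability candidates first
    return [name for _, name in above[:cap]]

# Invented probability estimates, purely for illustration
scores = {"a": 0.95, "b": 0.40, "c": 0.85, "d": 0.81}
print(select_targets(scores, threshold=0.8, cap=2))
```

Note that this only mechanizes the parameters a human chose; as both guests agree, the threshold and the cap, and whether to act on the list at all, remain human decisions.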
Denise: Well, there's this interesting Department of Defense document, Directive 3000.09, that is mentioned in this article, released around a year ago, 15 pages long, defining an autonomous weapon. Its definition is that it's a weapon that, once activated, can select (how does it do that?) and engage targets without any further intervention by a human operator.
Stan: That's a definition of what it would be; it doesn't mean one exists. And you're saying it was from 15 years ago?
Denise: No, it's 15 pages long.
Stan: I would be curious to know whether anyone in the military has authorized the use of such a weapon, and my suspicion is that they haven't, because they don't trust those weapons yet. They don't trust the algorithms that would be used.
James: Right. I think we will probably soon have them, though. You could imagine robots going with soldiers, and a robot looking around, able to detect signs of whether someone is going to shoot at the soldier, and it will shoot first. That actually could end up saving a lot of lives, not just of our soldiers; our soldiers would also be less likely to quickly open fire on someone who might be innocent if they know the computer is there, and the computer will be able to protect them if somebody jumps out in front and starts shooting at them.
Denise: Right, it might assist with friendly-fire type situations as well. Evan, what do you think about all of this?
Evan: I'm thinking about a couple of things. First of all, isn't it interesting how directives like this have really intimidating titles, like 3000.09 or whatever? It's not something fluffy like the Open Internet Order. What I also thought about is that we automatically go to the bad application of this technology. It's like, oh my gosh, a machine is going to make the wrong judgment about who to kill, but there is the possibility for it to make the right judgment in a situation where the human might have made the wrong one. I'm thinking about the situation you see a lot in police training, where police are run through a simulation in which they are apprehending a suspect, and the suspect's hand reaches back into his back pocket, as if reaching for either a weapon or a wallet. There's that split-second decision the officer has to make about whether to open fire because the officer's life is threatened. The machine may very well be able to run a Bayesian analysis, or what have you, evaluating facts and coming to his, its (isn't that weird, I referred to the computer as a "him") analysis in real time, and that may inform the decision not to open fire on that target where a human would have made an error. That would help in friendly-fire situations, but also in situations where it's not necessarily a friendly target; it could be an enemy, but one that is ready, willing, and able to hold up the white flag in surrender.
Denise: Alright, well obviously things that are armed with guided missiles are dangerous, and we want them controlled in a way that keeps the most people safe. The same thing goes for cars: full of gasoline, and able to propel themselves at high speeds into people and other cars. We've had the topic of car automation come up a lot on this show, and recently we've had a couple of stories on it. This week there was another article on The Verge about connected cars in general, the fact that Chevy now has cars that come with their own app store and LTE connectivity, and specifically on the question of liability. There was an accident in San Francisco on New Year's Eve, where an Uber driver (and Evan, I know you've talked about what could go wrong with these unaccredited Uber drivers) hit a 6-year-old girl in an intersection and killed her. He is being sued for wrongful death, and he is also being criminally charged with reckless operation of a vehicle. The wrongful death suit is not just against him but also against Uber, and it claims that Uber has engineered its whole automation system badly: it alleges that the Uber app and driver interface require drivers to use the app in a way that violates California's laws requiring you to be hands-free when you're driving. So, how does car automation factor into our singularity and smart technology discussion today? Evan, we will start with you.
Evan: Well, talking about the Uber lawsuit over the guy that hit the 6-year-old girl: there are a lot of ways in which that's just a relatively straightforward wrongful death case involving an automobile. I strongly doubt that when the family went to their aggressive plaintiffs' lawyer, working on a contingency fee basis most likely, his first thought was to say, "Oh, it's Uber's fault for this new-fangled technology." He's just suing Uber because Uber has the deeper pockets, and under the theory of respondeat superior, the employer is liable for the conduct of its employee if it was within the course of employment and the employee wasn't on a "frolic," the word that the law likes to use to exclude respondeat superior liability. So I think the fact that it was Uber was sort of an afterthought, not the main part of the liability theory that the plaintiffs have here. As for the other aspect of this question, will connected cars be safer? I sort of felt like I was misled when I actually read that article, because I thought it was going to be talking about the real innovations in safety that will happen with connected cars. Your ability to drive won't be lessened by fatigue and things like that. That's the way connected cars actually will make driving safer. Whether you're messing around with the apps on the dashboard seems to be a very separate issue, and unfortunately it sort of obscures the real question of safety when it comes to connected cars. I think there are some wonderful innovations there, because humans are bad drivers, and I wonder if James and Stan will agree that algorithms will actually be developed within our lifetime that will make self-driven cars truly safer than what a human can do. Take what you will from that.
James: Yeah, it's almost certain that we will very soon have self-driving cars. I think that will be one of the most successful commercial innovations ever, because when you factor in insurance estimates, it would be cheaper to have a self-driving car. When I started writing my book, I actually thought that the AI that might be used in self-driving cars would play a big role in the singularity, but I talked to someone at Google who works on AI, and he told me that, no, AI doesn't really play a role in self-driving cars; it's mostly an issue of mapping software. So AI doesn't really, or at least the AI problems have already been solved for self-driving cars. There won't really be much need to advance that technology.
Evan: That’s interesting.
Stan: I expect computers should do a better job driving than people, in principle, because look, driving is not something that requires a lot of conscious effort. You're sort of on autopilot when you're driving most of the time. A computer should be able to do that much better. The problem that I foresee is with the components being used to look ahead. I mean, just knowing what road you are on and how to get someplace, that's easy, and that can practically be done with a GPS now. The bigger problem is checking for somebody being on the road who's not supposed to be there, a pedestrian, or bad weather. I had a car that, when you'd go into auto drive at a particular speed, if the car in front of me slowed down, it'd slow down; if the car in front was going faster, it'd speed up. That was all very nice, but it didn't work in bad weather. It'd shut itself off; it'd start beeping. I presume this has gotten better, this was a few years back. My new car doesn't have that feature, so I don't know how much they've progressed. But the one place where you might need equipment that's fairly expensive, and this is where that comes in, is sensing that can get through really bad weather. The other thing is that it has to be able to tell the difference between a car that's in the road, a ball that's going across the road, or a kid who's going across the road. That's the more complicated part here. I don't doubt it can be done, and that it will be done in a few years, and when it shows up in cars it will have to be cheap enough. But there will be an expense in getting sensors that work well enough, in enough conditions, to see all that. Still, driving a car is a very simple thing. Most of the time.
Denise: Right, and I guess as I read through this article in The Verge and listen to us today, I'm sensing we are all in for this timeframe of "an annoying thing happened to me on the way to the singularity." As they try to make cars more and more responsive and intuitive, the cars are going to be asking you more and more questions, such as the one mentioned in this Verge article: it notices that you're out of gas and the car chimes in, "Shall I navigate to the nearest gas station?" You're going to have to tell it yes or no. I already have a Ford with a lot of automation, and I find myself constantly having to tell the car "no" when it tries to intuit things, so we're far from a perfect system of having them read our minds. Aren't we, James?
James: Yeah, we certainly are. We are nowhere close to having full human-level artificial intelligence.
Denise: Yep. Alright, what shall we move on to before we stop talking about robots and drones and things? I can't not talk about 3-D printed drones, which I found in Evan's Twitter stream this week. He says that they have been invented solely to make a trend watcher's head explode due to the convergence of controversies. I think this is so funny, Evan, and they could only be better if we could buy them with Bitcoins.
Evan: Right, and if you were using it to track down Ed Snowden or something like that, we could work in a few things here.
Denise: They are a real thing, right? I mean 3-D printed drones are actually being manufactured. There was one on Kickstarter that looks pretty cool.
Evan: Yeah, it looks like they are a thing. It makes sense; I mean, why not? There is actually a military implication with this too. We were talking about automated killers, and we could bring that issue in pretty proximally here as well. A general could be sitting in a command center somewhere and deploy an army of drones that at that moment doesn't even exist, but could be printed up somewhere and go and attack. So there are some interesting implications. It is a convergence of two very controversial elements, but it seems like a natural convergence. There's really not much more to say about it; not too many new issues emerge out of it that are not already built into the controversies of 3-D printing and drones to begin with. With 3-D printing there are a lot of interesting intellectual property issues they will still have to sort out, and with drones, of course, they will have to sort out all those privacy implications, among many others. Those are just the two biggest issues with all of this. I had never actually consciously thought about the two elements being combined, but once you read that article, it's like, yeah. It's only natural.
James: They have the potential, I think, to do a huge amount of harm. You could imagine: if an ordinary person could build a 3-D printed drone, put a little bomb on it, and send it off to kill someone without getting caught, it would destroy civilization. Certainly, a famous person could never be out in public if anyone could kill them and get away with it.
Evan: Right, and we see indications of that now. Was it Angela Merkel who was in public somewhere and someone flew a drone right up to her?
Evan: So yeah, the threat is real now, it would appear. But enough of being scared of the future for one day.
Denise: (laughter) Oh, I don’t know we’ll see if we manage to be any more scared. Stan, any more thoughts on 3-D printed drones before we move on?
Stan: Well, I actually didn't read that article. I'm sort of skeptical. You'd have to have a pretty good printer, or group of printers, to be able to make the parts and then put them together into a functioning drone. I won't believe it until I see it. And where's the bomb-making 3-D printer? After all, if you can make the drone, what's the good if you can't make the bomb? That's the more interesting question, which I also don't think is likely to be something that's going to be all that easy to do.
Denise: Well, the guns are out there. Right?
Stan: Guns are easy. Guns are really pretty easy. The bullets are already made; they're not making the bullets, I don't think, and I don't think they're making the gunpowder. So it's basically just a trigger that hits and makes a spark. That's not a problem. There's a world of difference between a gun and a drone that can fly.
James: There are already so many guns in the United States that it won't really radically change things.
Denise: Alright, then let’s move on to a friendlier topic. The topic of the friendly judge. This is a story that involves the social web. #socialweblaw: bump, bump
Evan: Oh wow, a "bumper": social web law. It's been a while.
Denise: Yes, I've got a few "bumpers" coming up here. Evan, this is one that you wrote up on internetcases.com: a fascinating story about a family law divorce dispute where the judge, a female judge actually, had made a friend request to the female litigant while things were pending. The husband wound up with the more favorable property settlement, and on appeal this friend request wound up being a pivotal factor in undoing that result, or at least revisiting it. Right?
Evan: Right, yeah, you've laid it out very well there. The precise issue before the appellate court was whether or not the trial court judge overseeing the divorce committed error in failing to recuse or disqualify herself from the proceedings after it was brought to her attention that everybody now knows that you, judge, tried to friend the wife in the divorce proceeding. The publications in the Bar Journal and all the commentators struggling with the appropriate use of social media by judges and lawyers and litigants really eat this kind of stuff up, these kinds of fact patterns, especially when you have an appellate court telling a judge that she did something wrong. These sorts of situations get a lot of attention, and here we have an example of where a judge messed up. So it's not too surprising a result; it's very much a common-sense result. The litigant (this is the language the court used, this cliché) was put between a rock and a hard place: whether to risk offending the judge by not accepting the friend request, or to engage in an ex parte communication that would jeopardize the result of the case if it had turned out to be favorable to you. Perhaps the more instructive part of the case actually came in some dicta, where the court noted that this is very much different from the situation where a judge is friending the lawyer who is representing the client. There is a distinction the courts will make in the analysis between whether it's okay for the judge to friend one of the parties directly versus whether to friend one of the lawyers. And the court said, you know, think of the situation in small counties where the legal community is very tight-knit. I'm thinking back to my hometown, Beckford, Indiana. I was at an event this summer where one of the well-known lawyers in town was sitting at the same table having lunch with the circuit court judge.
So it's an everyday occurrence there, and it would be noxious to prohibit Facebook friendships at that level. But a judge friending an actual litigant is not best practice.
James: I don't agree; I don't think it's that big of a deal. Facebook really aggressively tries to get you to make new friends. I don't know what happened in this case, but I could easily see how the judge could friend a litigant without even knowing she was doing it: the litigant was in her email list, or she was connected to someone else. I mean, it's not that big a deal on Facebook when you friend somebody. I remember when Facebook first came out, a student set up a Facebook account for me to show me what it was all about, and she went about friending some people. I teach at an all-women's college, so she was friending 19-year-old girls in my name, and I'm like, "Wait a minute, should you be doing this? Won't they think this is creepy?"
James: She said, "Oh no, no, no, this is no big deal, don't worry about it. They won't think there's anything creepy about it at all." And it isn't, I don't think. I think judges are now going to be afraid to be on Facebook, because they might accidentally hit a button that friends somebody who's a litigant, and now that litigant has an option of getting an appeal if she loses the case. I think this might be a case where, if you lose, you go to your lawyer: "Gee, what can I do?" "Oh, the judge friended you; you were really upset by this, weren't you?" "Oh, yes, I was." "Okay, now we are going to appeal it."
Denise: Well, I was thinking this would be a good opportunity to get into the topic the chat room was asking about back in our discussion of the singularity: that is, how long until AI takes over the function of lawyers, and judges, and the judicial system? This might be an instance where, as with cars and war-related technology, maybe an artificial intelligence wouldn't have made this bad decision. But if it's something that is innate to the functioning of Facebook, maybe even the AI would be taken in by it.
Evan: Well, an AI wouldn't have been subject to the negative implications of influence, right? What I mean by that is: sure, the judge as robot, as AI agent, may have friended the litigant, but so what if the friend request is declined or accepted? That's not going to affect its objectivity the way it would a human's. It would go and enjoy a sunset rather than be offended that the litigant didn't accept the friend request.
Stan: Well, that really depends on how the AI judge is programmed. If the AI judge is programmed to play tit for tat, and if someone doesn't accept your friend request then you give them a worse ruling, then it's just like a human being. So you're assuming that the people who are programming this are programming it to be completely objective, to be a wonderful judge, which may or may not be the case.
Evan: It would be the government programming it, of course, so it was going to be perfect.
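Stan's point, that an AI judge is only as objective as its programming, can be made concrete with a toy sketch. Everything here (the function names, the numeric scores, the penalty) is invented purely for illustration:

```python
def objective_judge(merits_score, friend_accepted):
    """Rules purely on the merits of the case; ignores the social slight."""
    return merits_score

def tit_for_tat_judge(merits_score, friend_accepted, grudge_penalty=20):
    """Docks points from a litigant who declined the friend request."""
    return merits_score if friend_accepted else merits_score - grudge_penalty

merits = 70  # hypothetical strength of the litigant's case, on a 0-100 scale
print(objective_judge(merits, friend_accepted=False))    # 70
print(tit_for_tat_judge(merits, friend_accepted=False))  # 50: the grudge shows
```

The two judges agree whenever the request is accepted; the bias appears only in the declined case, which is exactly the "rock and a hard place" the appellate court worried about.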
Stan: I actually think that overturning the previous verdict isn't enough. I think the judge should be censured for what she did. I don't know the sexual orientation of the judge, but if it were a male judge asking to be friends with a female litigant, particularly if she was young and attractive, the normal thing would be to wonder whether he was trying to cozy up, to find out if there would be some way to have some sort of sexual relationship later, and that would be something everyone would frown on. We don't know the motivation, but judges should keep out of personal interactions, which is what I view the friend request to be, with litigants, and I think they should do more than overturn it. They should be telling judges to keep their noses out of Facebook, just as you shouldn't be having a drink with the litigants in a case you are hearing, or offering to have one.
Denise: James, back during our singularity discussion, someone in IRC raised that Robin Williams quote from "Dead Poets Society" that goes: "And medicine, law, business, engineering, these are noble pursuits and necessary to sustain life. But poetry, beauty, romance, love, these are what we stay alive for." So does this mean that medicine, law, business, and engineering are going to be the purview someday of machine intelligence greater than what we frail humans are capable of?
James: Yes, but actually so will beauty. I would predict that within five years, most women in pornography won't be real women; they will be computer-designed creatures that men perceive as more beautiful than any real woman ever has been.
Denise: Well, that’s the case now, right?
James: Well, yeah, computers are taking over beauty. I think computers will be able to make music and art that we perceive as more beautiful, because they will be able to find a mathematical basis for what we believe to be beauty and create things that match it.
Evan: Well, but the real issue seems to be what's in the eye of the beholder, the ability to perceive beauty. Sure, an automaton can create beauty; it can be programmed to create beauty. But can it appreciate beauty? I think this is the second or third time we're kicking around this... go ahead.
James: What do you mean by the word "appreciate"? I mean, state a test: if the computer can pass the test, it can appreciate beauty; if it can't, then it can't. The test is something where we can objectively determine whether it passed. I think what you want to say is that humans are special, and you're going to protect that specialness by using a sort of vague criterion that can never be falsified.
Evan: Right, I feel like I'm talking to Dan Dennett, because there's this notion that this phenomenological problem isn't really there. But it's the essential problem of knowing what's going on in a person's mind. I'm just really skeptical, just intuitively (I don't know how to express this scientifically), that there's something different that goes on in the human mind, in the human intellect, when it perceives beauty, that would be very difficult, if possible at all, to create in a machine. What we are kicking around here, and we're just digging around the edges, and we haven't touched on this, and if I bring it up I'll be sad that we don't have three more hours to just talk about it, is consciousness. What it means to be conscious, to enjoy. And that's where a bunch of discussion goes, about rights, and process...
James: The mind is the brain, the brain is your cells, and the cells are machines. If you can perceive beauty and understand it, then a machine can necessarily perceive beauty. Unless you're going to say that we were created by God and have souls, and we can never replicate souls, then no: the human brain is a machine, and so whatever the human brain can do, machines can do.
Evan: I mean, Mr. Descartes, seriously, do you really mean that?
James: Yes, of course.
Evan: I mean is it that reductionist? Are we, I mean there are no . . .
James: We are made of atoms. I mean, I'm an atheist; I don't think there is anything supernatural about my brain. It's a machine, and we are learning more and more about it. There are about 50 thousand brain scientists in the world, and they haven't found a part of the brain that glows for no particular reason and is clearly supernatural.
Evan: Right, fair enough.
James: We don't fully understand how the cells work, but they appear to be machines every bit as much as a car or a computer chip. The brain's consciousness is mysterious, but that's because we don't understand it. Consciousness being mysterious is a function of our ignorance, not a function of consciousness.
Stan: Actually, I think it's a little more than that. There's what is going on in the brain, but then there are all these chemicals and hormones and whatnot that are released and float around the body, interacting with it. They have an effect, so it's not just the brain and what's in the brain by itself; there are all these other physical interactions going on as well. Now, maybe you can get a computer that also emulates that component. But it's not just the pure thoughts going on in the brain, because we are more than just the brain; there are all these other chemical interactions independent of the thinking. That's what I think is some of the difference. So let's say we're not talking about beauty that is purely aesthetic. Let's talk about sexuality: you see someone who is beautiful and you get all these physical reactions. How are you going to get the equivalent of that into the machine? It's not clear whether the machine can get turned on or not.
Stan: Okay, the machine can say "I am turned on" because you programmed it to say that, but is that really the same thing as when a human being gets turned on? The answer is: well, it doesn't have all these other chemicals going through its body, having other physical impacts that then feed back to the brain. So I think that by looking only at the organ of the brain by itself and comparing it to a computer, you are oversimplifying a lot of what's going on, even if you are a complete atheist and don't think there is anything supernatural about it. It's still something that could be hard to do in a computer program by itself.
Denise: Yeah, and what I would add to that is just that notions of beauty are so subjective and ever-changing too. If in fact we wind up living in a world where, as we see today, images of beauty are more manufactured than they are natural, maybe there will be a backlash and those images will no longer be considered beautiful. So if we are trying to get a machine to emulate human responses, the programming would have to constantly change and be subjective as well. Do you think you can duplicate that, James?
James: I don't agree with your assumption. What we find beautiful is determined by evolutionary selection pressures: men find women beautiful if the women would have made good mates back in caveman and cavewoman times. So I have a feeling the standards of beauty haven't changed all that much. You have to make...
Denise: I don't know. Well, I'm going to make "turned-on machines" our second MCLE passphrase for this episode of This Week in Law. I'm going to move on to another story that Evan wrote up at internetcases.com this week, this time on the copyright front.
COPYRIGHT LAW: Presentation with the FBI Warning label behind “Copyright Law” in the foreground.
Denise: Okay, Evan, fascinating story here involving Quentin Tarantino, and most everything involving Quentin Tarantino is bound to be fascinating. Here, Mr. Tarantino had a script for a Western that he was sharing with a select group of people, and one of those people seems to have leaked it, and it got out on the internet. Quentin got very upset about that in general, and upset in particular at Gawker Media, because they published a link to where one could read the script. So this is an attempt to impose liability simply for linking to allegedly infringing materials, an interesting kind of wrinkle. Can you shed some more light, Evan?
Evan: Sure. We don't talk about Gawker enough on this show. Remember last week we were talking about the Hulk Hogan sex tape; that was Gawker, so here we are talking about Gawker again. Gawker published an article that had a link (actually, over time they updated it to have multiple links) to another online location, where an anonymous party who had received a copy of the script uploaded it to the web. Gawker posted these links to say: you can read it here, click on the link, go here and download it. This happened almost exactly a week ago. Gawker published the story last Friday, I believe, late last week, and then by Monday morning Quentin Tarantino had prepared the complaint and filed it in federal court in California, so you might know what his lawyers were doing all weekend. It's a pretty straightforward copyright complaint, but there are a lot of nuances if you actually think about it. There are claims of direct liability against the "John Doe" defendants, the people who actually uploaded it to the site to which Gawker provided the link, but there's also a claim of infringement against Gawker itself, as you said, Denise, for publishing the link to the infringing material. It's a theory of secondary liability against Gawker: Tarantino is not alleging that Gawker is directly infringing his rights, but that Gawker facilitated and caused the direct infringement, the direct infringement being people downloading the script once they click on the link. So it evokes some notions of the Grokster case, which we've already talked about on this show: one induces another party to infringe if the object of the distribution of the device (or the publication of the website, we could say) is to have direct infringement result.
So, if this case doesn't settle, we could see another interesting decision coming from the courts about what Grokster-type liability means. We already have guidance since Grokster itself: Grokster came down in 2005, and in 2012 and 2013 there were a couple of different opinions in the isoHunt case, Columbia Pictures versus Fung. So the Ninth Circuit has said something about what it means to be liable for copyright infringement just for posting a link to something, because posting the link itself does not implicate the exclusive rights of the copyright holder; you have to look for another way to find liability based on that conduct. And it involves Quentin Tarantino, so yeah, it's going to be interesting. Quentin Tarantino and Gawker are two very colorful entities; something fun ought to happen with this.
Denise: Yes, but the linking liability thing is important; it implicates just about everybody who writes on the web, and certainly people who are engaged in news or news-like gathering activities. Stan, would you be disturbed if Gawker were found liable in this case?
Stan: No, I don't think so. Gawker knew that this was a purloined object. If Gawker just said, "Oh, we came across a website that has something interesting and we are going to link to it," that's one thing. But as I understand it, Gawker was fully aware this was a stolen object. Now, it's the type of object where it's not clear to me that it diminishes that much in value just because it becomes available on the web. It's not something from which you can necessarily get a lot of consumption value, and it's not clear that it's a competitor for the movie that would have been made based on it. So I'm not sure the damage would necessarily be that great. But if in fact it was really going to damage the potential work, then Gawker has to make the decision: do we want to give access to this purloined item or not? And if we know it's stolen, why should we be helping out the thief and hurting the person it was stolen from? So on a purely moral level, forget about whether it's legal or not, I have no problem with Gawker being found liable. They were trying to benefit themselves financially by putting up links to get a bigger audience base, and a bigger audience base increases the revenues you generate, and you are doing it at the expense of someone who has had their object stolen and hasn't put it online voluntarily. I don't think you should be rewarded for engaging in an activity that indirectly promotes the behavior we don't want to occur, which is the stealing of the object in the first place. And legally, from what I gather, I suspect there would be a case to be made, and part of it depends on Gawker's knowledge and actions and behavior. In the Grokster case, and in a lot of these file-sharing cases, it was very clear that the company's business model was: we've got stolen stuff here, we will help you get it, and we're going to make money off of doing that.
Their defense was, "Oh, there's a legitimate reason for doing this, because people who put up music here that is non-commercial, or that they've put up voluntarily, really don't care; they are perfectly willing to give it to you, they've given up their copyrights, or it's open source." The claim was that the good stuff would compensate for the bad stuff, but that was clearly not what their model was about. And it is clear that Gawker knew full well what it was doing, and I don't see any possible positive side they could claim, that they were providing a benefit to society in doing this.
Denise: James, do you have any different take on this?
James: A little bit. I do agree that what Gawker did is immoral; I just worry about the chilling effects. It would be so easy to accidentally link to something that was copyrighted. Even if you can prove that Gawker knew what it was doing and realized it was helping to violate copyright law, there are going to be a lot of people who are afraid that if they link to something without checking it out too carefully, someone in the future could come after them. And then you will have people playing games: you will have political parties putting copyrighted material in places where they hope the other side links to it. Your political enemies linked to something, you claim copyright protection, your legal eagles go after them, and this will scare people and hurt communication and the internet. Frankly, I would rather have copyright law weaker, with less protection, than have chilled free speech rights.
Denise: And the only other thing that struck me about this, Evan, in your coverage and also Eric Gardner's coverage over on The Hollywood Reporter's "Esq." blog, was the emphasis laid on the "click here" or "we have it here" language, which the complaint seems to be attempting to imply means "we have the document and we're going to give it to you," not "we are pointing to an intermediary third party." I just hope that the court doesn't get confused by that language.
Evan: Well, I mean, let's give the court the benefit of the doubt: that this is the inducement, "here it is," even if, in the nomenclature of hyperlinks, it really means "over there." So I don't know; I'm optimistic that the court will see past all of that. We'll see. As far as what Stan and James have said, I concur inasmuch as Gawker shouldn't have done this and is liable for it under secondary liability. I would just say I would come to that conclusion without using terms like theft or immoral, because we can talk about this in terms of copyright being more in the form of trespass. And morals? Who cares about morals when we've got the law.
Denise: (laughter) Exactly. All right, one last story and then we’ll be out of here. This one, as was the last one, is entertainment, but we are going to play the “bumper.” ENTERTAINMENT LAW: All right, so this is actually a quick kind of note on the DISH Hopper feature suit; actually, there are a couple of those suits going on. This is the FOX one, which has been successful thus far for DISH in defeating a preliminary injunction against the feature. So the lawsuit still goes on, and they’re going to get to the merits of the suit, but on the question of whether the feature can continue operating while the parties are arguing over it, the trial court said yes, the appellate court said yes, and the appellate court just recently said yes again: it’s not going to review its own decision, and it’s going to let the feature remain while the parties are litigating. Stan, you found this a fascinating point of discussion, so tell us why.
Stan: Well, it’s really the underlying idea of what’s going on more so than the particulars of whether or not they’re going to allow a preliminary injunction to go forward. What you have here is a market that works. People watch programs, and programs are supported by advertising. The viewers are happy; they are voluntarily watching and getting benefit out of it. The broadcasters are happy, the transmitters are happy, because they are getting paid by the advertisers, and they are getting paid more than the programming costs, otherwise they wouldn’t be in business. So it’s a win-win, everyone is happy, like any consumer market: the consumers benefit and the producers benefit. But this activity that’s occurring is making consumers happier, because they would rather watch the programming without the advertising; but if all the consumers do that, they destroy the market, and everyone is worse off. And so the question is: should the law be trying to keep the parties in this market, which makes both of them better off, from engaging in an activity that will destroy the market and therefore make them both worse off, when one of the parties thinks it’s going to make itself better off by what it’s doing and wants to eliminate the ads? That’s sort of the interesting theory: can you, should you, try to prevent behavior that’s going to make a party worse off where the party doesn’t fully understand that, because it thinks it can free ride. So that’s sort of what’s at issue here. From an economic point of view, you can make a strong case that if in fact everyone were to use something like the Hopper, and everyone could remove the commercials, they would all be better off if they could collectively agree not to do that, and they should all be in favor of a law that doesn’t allow them to do it. Therefore, should that be the way the law rules? So, from that point of view, there is an argument for coming out against the Hopper.
Now, that doesn’t mean the law will do it, and it’s not clear from the preliminary injunction standard, which requires pretty strong evidence that they are going to win for the injunction to go through. But the actual economics of what is going on is that you’re better off here preventing consumers from doing what they think they want to do, because it makes the world worse off.
Denise: Yeah, it will be interesting. I haven’t read a whole lot of the briefing in the case yet, and we will see more substantive briefing as the case goes forward, whether that public-policy economic argument gets made in addition to the straight copyright infringement claims, which seem to be kind of problematic in this case but are being asserted. Anybody else have any thoughts on the DISH Hopper case before we go ahead and wrap? Evan?
Evan: No, nothing that I can add substantively to this; it’s just interesting to see how this plays along with the saga with Aereo and the ivi TV part, and emerging technologies, and the progeny of Cablevision. It’s interesting to see how this area of the law will develop here.
James: I agree with Stan. I think, though, that if people are allowed to strip out commercials, you’ll have commercials that are embedded throughout the entire show. They will be impossible to get rid of, and that will lower the quality of entertainment.
Denise: Yeah, and we see a bit of that already happening today, as people can already skip commercials pretty efficiently. Let’s move quickly here, because we’re going over our ordinary time, but we’ve had such a good discussion today that it’s been well worth doing. So I’m going to kind of race through our tip and resources of the week. Our tip comes from Scott Meshone, a listener of the show who is constantly in IRC, who decided it would be fun to do an Ask Slashdot on the hypothetical question: what would you suggest doing to protect your passwords from your own amnesia? How would you sufficiently secure them and make sure that you are going to know them should you somehow forget them? The reason I thought it was interesting, and that Scott thought it was interesting, is the confidentiality that a lawyer can provide for this sort of thing, so a lot of the suggestions involved sort of parceling out your password strategy and having several lawyers each hold a piece of the puzzle to bring you up to speed in the event of some type of catastrophe. So check it out in more detail; all the links to our resources and the discussion points we’ve covered today are at delicious.com/thisweekinlaw/144. And this Slashdot thread is a pretty interesting approach to your password strategies. Our resources revolve around our great guests today. We’ve already talked a lot about the first of them, Singularity Rising by James Miller. I just got word during this show that we’re definitely putting the book in the running for the voting by my book club, so I’m pulling for it and hoping we will get to read that one and discuss it. And I also wanted to highlight something from Stan.
Stan wrote a great definition, really more of a lengthy article, on intellectual property, specifically focused on patent and copyright, in something called the Library of Economics and Liberty’s Concise Encyclopedia of Economics. So, Stan, it seems like you might have written this a few years ago, is that right?
Stan: Yeah, you can tell if you read it. It’s a few years old.
Denise: Just because the current issues or controversies that you mention at the end are not all that current anymore. But it’s a great discussion, and for anyone who wants a great overview of the history of copyright law and its ramifications, and the same thing with patent, it’s a great resource. I also wanted to highlight the Center for the Analysis of Property Rights and Innovation, which issues grants for research projects that may be of interest to people who listen to and watch this show. Can you just, in a nutshell, tell us what the center does, Stan?
Stan: It supports academic research on intellectual-property types of issues. I might mention as well that there’s sort of an updated version of the intellectual property article that the Milken Institute asked me to write for the latest issue of their magazine, “The Milken Review,” and I talk about copyright there. That issue is supposed to come out sometime in the middle of January, so maybe it’s already out. And it talks about current controversies, more current controversies, if anyone is interested.
Denise: Wonderful. If you send me the link, I will put it in our discussion points for this show so that people listening down the road, as our intellects continue to evolve, will be able to find that particular bit of information. Also, the projects you guys are working on now at the Center for the Analysis of Property Rights and Innovation include online/offline channel synergy as evidenced by the video rental market. That sounds pretty interesting, and there are some other great topics over there too. So if you’re someone interested in academic research, I’d encourage you to check this out. I just can’t thank you guys enough, Stan and James, for being with us today. A really wonderful discussion, and a wonderful opportunity to share your non-artificial intelligences with our audience. Stan, anything you want to leave our guests with, anything you’re involved with that’s coming up that you want to highlight before we get on out of here?
Stan: Well, it’s not apropos to what you guys do, but I’ve basically spent the last year writing a book about research fraud in academia. I don’t know when it’s coming out, and I don’t know who the publisher will be yet, but I presume that if you have any interest in it, check my webpage every once in a while and you’ll find out.
Denise: Wonderful, thanks so much. And James what about you?
James: If you are interested in working towards a good singularity, the best organization is the Machine Intelligence Research Institute (MIRI). They are trying to develop a mathematically provable, friendly AI that can basically bring us to utopia.
Denise: Nothing wrong with utopia. Evan, it’s utopia every Friday when I can get together with you and chat about such heavy issues and angels dancing on the heads of virtual pins.
Evan: (laughter) Right, same here. Yeah, this has been a really great conversation, and I have enjoyed it. I wasn’t joking about wanting three more hours to discuss consciousness and things like that. We touched on some great issues, and I hope we can continue it someday.
Denise: Me too. All right, guys, we’ll let you go and enjoy your weekends. I’ll just mention that this show records live at 11 o’clock Pacific time, 1800 UTC, on Fridays. You don’t have to join us live, it’s just fun to do so. We love it when you jump into IRC and play along with us and feed us information, suggestions, questions, and corrections. If you can’t do that, you can find this show at twit.tv/twil and on YouTube at youtube.com/thisweekinlaw; we are on Roku, we’re in iTunes; however you like to consume your netcast entertainment and education, you’re likely to find us there. Check out the twit.tv/twil page for all that information. And get in touch with us between the shows for sure. I’m email@example.com; Evan is firstname.lastname@example.org. Let us know your Ask Slashdot questions, guest suggestions, topic suggestions, and critiques; we love to hear it all. You can do all that over on Facebook and Google+ as well. And we will see you next week on This Week in Law! Take care!