Transcripts

This Week in Law 273 (Transcript)

Denise Howell: Next up, you are going to hear This Week in Law. Today we are talking game intellectual property, phone location privacy, lots and lots about drones and robots and the policy and law relating to them, and smartphone locking laws in California. We've got Michael Froomkin, Padre SJ, Evan Brown, and me, coming up next.

Netcasts You Love, From People You Trust. This is TWiT! Bandwidth for This Week in Law is provided by CacheFly at CacheFly.com.

Denise: This is TWiL: This Week in Law, Denise Howell and Evan Brown, Episode 273 recorded August 29, 2014

Self Suing Cars

Help support TWiT with your Amazon purchases. Visit twit.tv/amazon, click on the Amazon banner, and shop as usual. It doesn't cost you anything, and it helps support TWiT. That's twit.tv/amazon. We thank you so much for your support. Hi folks, I'm Denise Howell, and I'm so thrilled that you are joining us this week for This Week in Law. We've got a spectacular panel of folks and some really interesting topics on deck. So let's get right to it. First I'm going to welcome my co-host Evan Brown. Hello Evan.

Evan Brown: Hey Denise, good to be here. Hey, did you see Padre is with us today? How awesome is that?

Denise: I did and it is beyond awesome. Hello Padre SJ.

Fr. Robert Ballecer, SJ: Hello TWiL folks. It's so nice to be here for this hostage exchange. You were on twice. Now it's time for me to reciprocate.

Denise: That's right, we had to bring you across many borders and through several earthquakes to join us today, but it's great to have you. For anyone who somehow has missed Father Robert Ballecer in your life to this date, allow me to bring you up to speed. He hosts TWiET, This Week in Enterprise Tech on our network in addition to Coding 101, Padre's Corner, and Know How. We are thrilled to have him, and all of those shows are wonderful. You can catch Evan and me on a couple of episodes of TWiET, just me on one. Really great to have you joining us on TWiL. I think this is your third time?

Fr. Robert: Yeah, strangely it has been a while, a few months since the last time. But I'm not a stranger to TWiL.

Denise: No, no stranger. Someone who has heretofore been a stranger to TWiL as far as appearing on the show, but not a stranger to us as far as our following his work, his fascinating conferences, and everything that he is involved in, is University of Miami Law Professor Michael Froomkin. Hello Michael.

Michael Froomkin: Hello.

Denise: Great to see you. Great to have you on the show.

Michael: This is great. Thank you so much.

Denise: We are so thrilled that you can join us. Michael is a very, very busy man. You might remember back in April when we were following along with the doings at the We Robot Conference, which is the annual conference on robots, law, and policy. Michael organized that conference and is its founder, correct?

Michael: That's right. That's right. Of course Miami is such a big center of robotics that we have to have a conference.

Denise: That's right. Of course it is. I've always suspected that those people on the beach weren't quite real. So that explains much. Michael is also the Laurie Silvers & Mitchell Rubenstein Distinguished Professor of Law at University of Miami.

Michael: Do you know who they are?

Denise: No, who are they?

Michael: They are the founders of the Science Fiction Cable Channel.

Denise: Oh, how awesome. Of course, I would always say that.

Michael: Isn't that a great chair to have for a technology school? It's wonderful.

Denise: It's perfect. He has taught internet law; he teaches a lot of nuts-and-bolts law classes in addition to IP in the Digital Era, Internet Governance, Law and Games (we have a pretty interesting topic along those lines today), and ecommerce, and of course he is really interested in robots, law, and policy. We can't imagine a better fit for our show, and we are so thrilled that you can join us.

Michael: I'm looking forward to it.

Denise: Let's start out; let's see what's up with Twitch and the copyright issues they have been facing.

(Music intro plays)

Denise: Some really interesting stuff going on with Twitch. I will start at square one for anyone who thinks that I am having some sort of speech tic or is not familiar with Twitch. They are a video channel that, Padre, help us with the history here, weren't they part of Justin.tv originally?

Fr. Robert: Right, so it was a spin-off from Justin.tv, which of course was at first an experiment to put someone's life on the internet. They decided that they wanted to go in a slightly different direction. Rather than just allow people to live stream their lives, they wanted to have a specific focus, and the focus that they picked was gaming. So Twitch is a place where gamers of any genre, doesn't matter if you do role playing, if you do speed runs, if you do console gaming, or PC gaming, can stream their gaming session along with a little video of themselves live. You would be surprised how many people watch these streams. Millions and millions of people at any given time are watching people play Minecraft, or World of Warcraft, or Diablo. It's actually kind of addicting. I will admit to spending more than a few hours watching somebody play a video game.

Denise: It's also educational. I have a 10 year old who loves Minecraft, and he doesn't watch TV anymore. He hasn't heard of Twitch, and I'm not sure that I'm going to tell him about Twitch because I've already lost him to several dozen YouTubers who do this kind of thing. He watches so that he can play better in the games; he's getting pointers from professionals, kind of thing. These people are professionals. They are monetizing these streams of themselves playing video games, putting a very high volume of this stuff out, and making money off of ads and donations, I trust. Some find really interesting ways to monetize, particularly with Minecraft, where you can build so many additions and augmentations onto the game. There are whole servers and things that you can join, and you get special things at the server if you pay extra money. The YouTube channel is the marketing platform for that. People are making money doing this, but there are a few issues. Evan, you first pointed me, I think in your Twitter feed, toward the fact that Twitch was having a content ID kind of issue where their content ID system was automatically removing audio from some of their videos, some very high profile videos, some gaming tournaments that were going on. This was very, very bad. They took a lot of flak for it. Do you have anything that you can add to what happened there?

Evan: There was an interesting context for all of this. This was in the early part of August; I guess it's still August when we are filming this, the first couple of weeks of August, when there was a lot of discussion around what's going on with Twitch. Is Google going to buy it? Is it going to be subsumed under the YouTube brand? Since then we've learned that Amazon bought Twitch for 970 million dollars, so there was this little transaction going on here. It's sort of interesting to see that there was a little bit of housekeeping going on. It bears emphasizing, and you said this already: this copyright controversy is really dissatisfaction, outrage may be too strong a word, among users of the video on demand service, not the live interactive part of Twitch. When people would upload themselves doing playthroughs of video games with either ambient music or music laid down as a track with it, Twitch became a lot more restrictive, or less lenient of that, and was muting large portions of tracks that were identified. The technology at work here was Audible Magic. Emmett Shear, the CEO of Twitch, did an AMA on Reddit, sort of summarized these things, and essentially apologized for not giving the user base more advance notice of the fact that there were going to be these greater restrictions. So there is one part of the story that is the outrage. We can look at it and say, well, really Twitch was doing its best to comply with the law, and to that end was doing a better job of meeting the requirements to sail a ship into the Safe Harbor under Section 512 of the Copyright Act, the DMCA. Then it is also interesting to see how this played out historically, how it was going on in preparation for the closing of the deal that ultimately turned out to be with Amazon. So there are several different angles to look at here, several different interesting things to see go on, and we as copyright enthusiasts can recognize that it's very difficult to be compliant, especially on a huge platform like this.
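As a purely illustrative aside, muting "large portions of tracks" rather than exact matches suggests padding and merging intervals around flagged segments. Here is a minimal sketch of that idea; the fingerprinting itself is treated as a black box, and the function names, padding figure, and list-of-samples representation are all assumptions, not Twitch's or Audible Magic's actual implementation:

```python
# Minimal sketch of content-ID-style muting. The fingerprint matching that
# a system like Audible Magic performs is treated as a black box returning
# (start_sec, end_sec) intervals; the padding and names here are invented.

def merge_intervals(matches, pad=30):
    """Pad each flagged interval, then merge overlaps, so whole stretches
    of audio get muted rather than exact matches."""
    padded = sorted([max(0, s - pad), e + pad] for s, e in matches)
    merged = []
    for start, end in padded:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def mute_flagged_audio(samples, sample_rate, matches):
    """Zero out samples inside flagged intervals (real systems operate on
    compressed streams, not raw sample lists)."""
    for start_sec, end_sec in merge_intervals(matches):
        lo = int(start_sec * sample_rate)
        hi = min(len(samples), int(end_sec * sample_rate))
        samples[lo:hi] = [0] * (hi - lo)
    return samples

# Toy: one hour of fake audio at 100 "samples" per second, two flagged songs.
track = [1] * (3600 * 100)
mute_flagged_audio(track, 100, [(600, 790), (800, 1010)])
```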

Denise: The larger issue, beyond the audio tracks: even if you are not adding any additional audio or other copyrighted and perhaps unlicensed material to your videos, just having the playthroughs is controversial in its own right. Lo and behold, we have a law professor who teaches a course on Law and Games. Michael, I'm wondering what you are thinking about this whole universe of people who are making videos about playing video games, whether they are educational or entertainment oriented, and including the playthrough video in there. Some of these games are actually designed with a recording feature built in, as I understand it. It's almost hard under that circumstance to argue that there is not some kind of implied license, yeah?

Michael: Well, I don't know. It depends on what they put in the terms of use, right? They may say, well, you can record it yourself, but you can't broadcast it. If it wasn't in the terms of use last week it's going to be in there next week. I don't teach copyright; at least I don't teach it when I can avoid it. But you can't avoid it. As I was listening to this I was thinking about it, as us law professors do, in two completely different ways. One is what is the law actually going to be, and the other is what would a sensible law look like? Of course, with the DMCA there ain't that much relationship between the two things. So if you start with what the law is going to be, the first thing that I thought about was songs. If you write a song and I sing it, everybody understands there is a copyright issue there. If I do a cover of your song in order to sell records then we are going to have a problem with that. How is a walkthrough really different from a cover? In a sense the game is the song and the walkthrough is the interpretation. It just shows you how our use of copyright is so different from the way that the law is going to think about things. Legally there is a pretty darn good chance that the law is going to say that it's a copyright issue. The next question is: is that going to make any sense? Where are people being harmed? In fact, I think that the smarter, more enlightened folks who sell these games are going to want to encourage walkthroughs, because it builds a community and it builds demand. While there are a lot of people who obviously follow along on these walkthroughs, the walkthroughs aren't going to substitute for playing yourself. It's a very different thing. I'm not sure it's a sensible rule economically, but I think that it is the rule.

Fr. Robert: I absolutely agree with you on the second part, which is that the more forward looking companies, the more forward looking studios, are looking at this and saying that this is actually good. We want people to see our game, we want to get the exposure, and we want people to be wowed by the visuals so that they play it. On the first part, however, one of the difficulties that we are running into when we are trying to upload this content is that a lot of these studios are saying that it's not really a game anymore. This isn't Pac-Man, this isn't Centipede. More and more of these games are interactive cinematic experiences. You play to a certain point, and you trigger a cinematic. That cinematic will be the same for everyone, so once you get to that cinematic I don't think that there is any longer any discussion about whether it's a performance or it's a replay. It's absolutely a replay.

Michael: It's the same thing with the audio track. They are playing music in the background. Same problem.

Fr. Robert: Right. So this is what is kind of baffling me about Twitch. I understand why they are doing this. They were cleaning house because they knew that they were getting bought. They ended up getting bought by Amazon rather than by Google, and there are a lot of comparisons between this and the acquisition of YouTube. Back when YouTube was acquired, so many naysayers were saying, well, how are you going to monetize this? All it is is a bunch of copyrighted material that people have uploaded, especially a lot of anime and the stuff that belonged to Viacom. YouTube was able to claim Safe Harbor because they said, no, that is not the purpose of the platform. The purpose of the platform is for people to upload their own content; those other people are abusing it. They were, more or less, successful with that strategy. Twitch is different. They do nothing but upload content that is copyrighted. They do nothing but upload content that is going to be challenged by game corporations. I don't see the same Safe Harbor argument. They can't say that the platform is being misused; that's what the platform was built for. Even though, I am with you, I think that the smarter companies should see this as a good thing, we already know Nintendo is going to go after anybody that uploads any game play from their games, and if Twitch is anything like YouTube then those pieces of content are going to get pulled down.

Michael: It's a big target. Especially with that price tag; that means that it's a really big target. That's lawsuit bait.

Denise: It's a target on a couple of fronts, because you have the site itself that is the larger target, monetizing all of its individual users uploading these videos, but the users are making a living at this too, or trying to. There are a couple of potential avenues of pursuit if you are someone in the market to bring a copyright lawsuit. Michael, you were talking about the law as it is and the law as it should be; are we going to see any reconciliation here? Clearly there is...

Michael: Ha!

Denise: Ha?

Michael: No, I think copyright has got the cards here, right? They've got the rule that they want; some of it is embedded in International Conventions or interpretations of International Conventions. Why would they want to compromise? From their point of view, having a strong background rule is perfect. If they want to waive it for one of the enlightened companies they can do that under the rules. They've got all of the cards; they've got all of the choices. Especially in this political environment getting anything through Washington is really, really hard. I don't think there is much hope for change.

Denise: On that sad note, just to note in passing, here's a stat, not just from YouTube but across all of Google. TorrentFreak reported on this, and Google itself put its numbers out as part of its transparency efforts: it's being asked now to remove over a million items every day for copyright infringement. So the volume has just ramped up exponentially. Someone did the math.

Michael: I wish that said allegedly pirate.

Denise: Yes. Yeah, I think that's TorrentFreak's headline that says "1 Million Pirate Links Per Day". It's funny that they embrace that terminology. They write, "To put these numbers in perspective, Google is currently asked to remove an infringing search result every 8 milliseconds. Compare that to 1 request per 6 days back in 2008." Clearly people are putting the DMCA through its paces. Evan, what do you think that this means?
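Taking those figures at face value, here is a quick back-of-the-envelope check. Note that the round million-a-day count works out to one request every 86 milliseconds or so, so TorrentFreak's 8 ms figure presumably reflects a different counting basis, such as individual URLs within notices; the numbers below are only the arithmetic:

```python
# Back-of-the-envelope check on the takedown volume quoted above,
# assuming the round figure of one million removal requests per day.
MS_PER_DAY = 24 * 60 * 60 * 1000            # 86,400,000 ms in a day

per_day_2014 = 1_000_000                    # ~1M flagged items per day
interval_ms = MS_PER_DAY / per_day_2014     # ~86 ms between requests

per_day_2008 = 1 / 6                        # one request per six days
growth = per_day_2014 / per_day_2008        # a 6,000,000x increase

print(f"~{interval_ms:.0f} ms apart now, ~{growth:,.0f}x more volume than 2008")
```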

Evan: I just thought they didn't say allegedly pirate because these were Pirates of the Caribbean and Pirates of Penzance, and stuff like that. Those are staggering numbers. Did you already say that that was 1 every 8 milliseconds or something like that?

Denise: Yes.

Evan: Is that what it is, compared to 1 every few days in 2008 or something like that? Extraordinary numbers. Clearly what we are seeing is a greater use of more sophisticated technologies going on here to scour the part of the web that can actually be scoured; this isn't even considering the dark web, presumably, since that wouldn't be showing up in Google in the first place. That gives us a sense that this might be the proverbial tip of the iceberg, or the tip of the proverbial iceberg, I'm not sure which.

Michael: Can I interrupt you for a second? Why are they scouring the web? Aren't they just scouring Google?

Evan: Is there any part of the web that is not on Google? Duly noted.

Michael: That's what I'm interested in. How do they find the links? Do they find infringing content and then look for it on Google to pull it down? They probably go through Google just looking for the links, and they automate that. It would be really interesting. There has been a lot of anecdotal data about people making false DMCA requests. It would be really interesting for somebody to do a grab, take 10,000 of these requests, and figure out how many of them are legitimate. I have no idea what the number would be. It could be very high.

Evan: I guess I'm conflating Google and the web, but I think that it would only make sense to think that, to the extent that they are sending Google notifications of links that are in the Google index, it's what Google has indexed on the web and not necessarily vice versa. I'm not assuming that Google has indexed the whole web, though I think it's safe to assume that it's most of it. I guess I'm just emphasizing the point that there is a greater amount of sophistication, not only in the scouring that is being done; it's amazing to think what Google is having to do here. It would not be humanly possible without essentially conscripting the entire population of the earth to review all of these things manually. Not just to pull up the request that is being made, but to make some evaluation as to whether this is infringing content and make some thumbs up or thumbs down decision on whether or not this is something that might qualify as Fair Use, for example. I'm just now remembering that I spoke at a conference in 2006; this was right when William Patry had gone over to Google. He's of course the famous copyright scholar, and he very memorably said to the group that was assembled there that Google does indeed look at DMCA Takedown requests; I think he was talking about Blogger. Google then makes a determination as to whether or not this might be infringing, whether it might not be infringing, whether it may be Fair Use. I suspect that that is no longer the case unless it is some very particularized situation, and I can't even imagine what that would be that would get human review on the first round with these staggering kinds of numbers. It's just amazing to think where this could be going if we keep seeing exponential growth, and whether that growth is going to change. These numbers indicate a greater sophistication in the technology; they indicate more, in a nod to you professor, allegedly infringing content online. We are just going to have to continue to see greater advancement in these things to keep up with the requests. I think that there is advancement on the perception side, you know, the finding of allegedly infringing content, and that has to be matched with equally powerful technology to meet all of those DMCA Takedown requests.

Denise: The chilling effect of this on business must just be staggering. It would be interesting to find a way to measure that, although I don't know how you would do it. If you are someone who has a good idea about becoming some sort of net intermediary, whether you are someone like Twitch or somebody else who is going to host content where users can be posting things that may have copyrighted material in them, you have got to build this into your cost of doing business. I have got to think that it is chasing away a staggering number of businesses. This is a cost where you would have to, as Evan just said, conscript every human on the planet to actually review things thoroughly. Padre, do you think that this is having a huge chilling effect?

Fr. Robert: It can, but the DMCA Safe Harbor Provision is actually pretty clear about who is and who isn't protected. You have to have a listed agent to handle any DMCA Takedown Notices; that much is clear, and that's already a cost. You have to have no knowledge of, or financial benefit from, any copyrighted material that may be hosted on any services that you may be running. You have to have a copyright policy for dealing with non-compliant material that may be hosted on your service. As long as you've got that, you've got a nice fallback to say, hey, I'm in the Safe Harbor. Now we know from experience that that is not necessarily going to defend you, but when I write up Acceptable Use Policies the DMCA Safe Harbor provisions are always in there. I'm pretty sure that my Acceptable Use Policies are standard; I've seen them in many, many different places. So, is there an effect? Most probably, in that it changes that particular byline inside of an AUP. But is there a super chilling effect? We haven't seen it yet, just because the DMCA system is so broken and so clogged. As Evan mentioned, it's going to get even more clogged, and I think it will eat itself before really big business and really big enterprise networks are affected.
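As a purely illustrative sketch of the housekeeping Padre describes, here is what a minimal takedown-intake structure might look like for a hosting service. Nothing here is statutory language; the field names, helper functions, and three-strike threshold are all invented for illustration (Section 512 requires *a* repeat-infringer policy, not any particular number):

```python
# Illustrative takedown-intake sketch for a hypothetical hosting service.
from dataclasses import dataclass, field
from collections import Counter

DESIGNATED_AGENT = "copyright-agent@example-host.test"  # must be registered

def disable_access(url: str) -> None:
    print(f"disabled: {url}")                   # stand-in for unpublishing

def notify_uploader(uploader_id: str, url: str) -> None:
    print(f"notified {uploader_id} re: {url}")  # so they can counter-notify

@dataclass
class Notice:
    claimant: str
    infringing_url: str
    uploader_id: str

@dataclass
class TakedownLog:
    strikes: Counter = field(default_factory=Counter)
    repeat_threshold: int = 3                   # made-up policy choice

    def process(self, n: Notice) -> str:
        disable_access(n.infringing_url)        # act expeditiously
        notify_uploader(n.uploader_id, n.infringing_url)
        self.strikes[n.uploader_id] += 1        # repeat-infringer tracking
        if self.strikes[n.uploader_id] >= self.repeat_threshold:
            return "terminate account"
        return "content disabled"

log = TakedownLog()
print(log.process(Notice("Acme Records", "https://example-host.test/v/123", "user42")))
```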

Denise: Well, I suppose one thing we can do is teach robots how to review DMCA requests and then turn them loose. We will talk about that in just a moment, but right now I would like to thank our sponsor for this episode of This Week in Law, and that's you. You folks watching. You know how you can help? You can help by shopping at Amazon. You are already doing it. I know that you are. We all do it. Now you have yet another reason to consider Amazon in your arsenal. We have been talking about its purchase of Twitch. You know the price, selection, and convenience of Amazon. Now you can support us here at TWiT with your Amazon purchases. It's easy: what you need to do is go to twit.tv/amazon. Make it a toolbar bookmark, or whatever makes it easy for you to remember, hey, I'm going to go buy that book, or piece of music, or piece of electronics. Instead of just heading straight over to the site, use our link, and when you do that it costs you nothing extra, and anything you purchase helps keep the lights on here at the TWiT Brick House, and helps bring you our show, and Padre's shows, and every other show on the network. So it's a real important thing just to remember. We know you are a fan, we know that you shop at Amazon, so just do it. Make it easy. There is so much great stuff that you can get there if you are not already shopping there. I am just so hook, line, and sinker a customer of theirs. I can't think of much that I don't buy at Amazon these days. So even if you are just the occasional shopper it certainly helps. Whatever you buy, please remember to click on that Amazon banner at twit.tv/amazon. It doesn't cost you a thing and it helps support TWiT. You can also bookmark that page and click through our link every time you shop at amazon.com. There are links there, too, if you are a person in the United Kingdom or Canada; those work as well. Use those links instead. Once again, twit.tv/amazon. Thank you so much for your support of our network.

Let's talk about robots and what they are able to learn. Was this at Harvard Michael? Or am I getting my universities confused? Someone is...

Michael: I don't think it was.

Denise: No, we have some swarming robots that are the product of that institution, but we do have some robots that are learning from YouTube videos. Maybe they are learning how to play video games, since we know how many of those are out there walking you through. But no, this is Cornell, by the way. They have created something called Robo Brain, an extensive database, created by Aditya Jami, that stores information in a robot-friendly format so that machines can pull from it and learn how to do different kinds of tasks. We have a video playing, if you are watching our live video right now, of a robot getting some commands to make a bowl of ice cream. It's actually doing a great job. I noticed that it got a little drip on the counter, but I've got to say, I've already mentioned that I have a 10 year old in the house, and the robot is being far tidier than my 10 year old is when he gets himself some ice cream. Apparently they are doing a pretty good job of walking the robots through learning these tasks. Michael, you are a bit skeptical?

Michael: Well, it didn't tell us how it learned. For me, if they are really doing it then it's one of the most exciting stories of the week or the month, because we've already made a design decision for a lot of robots. We want them to be humanlike because our entire built environment is constructed for featherless bipeds of a certain height and strength, right? So if you want your robot to get out of your door then it's got to be able to turn doorknobs. That's actually a fairly tough problem for a robot because doorknobs come in all shapes and sizes. You don't want to pull it off of the door, but you want the door open. You have to figure out which way the door goes and all of the rest of it. So we are building these robots that are vaguely human shaped, even though that may not be the optimum for every task, because it allows you to make them general purpose and fit in our environment. Well, our environment is really complicated, so the training and programming aspect is that you have got to have robots that can learn. If they can learn, they really can learn from videos, and what is not clear to me is if these videos were purpose built by the researchers to teach the robot or if they are just pulling them off the web through some process. If it is the latter, if they are able to learn from videos that weren't designed for that purpose, then it's only a matter of time before we have other programs sorting the videos to work out which ones are good training and which are bad training. Do we really want the robots to learn how to shoot an Uzi? Maybe not. But that just opens up the possibility of having robots, in a sense, being programmed by things that weren't written by programmers: people just going around in their lives and doing ordinary tasks. That is the potential for a giant flowering of robot abilities. I guess even if these guys haven't figured out how to do it, someone is going to figure it out eventually, and that is going to be a big deal.

Denise: You touched on an interesting law and policy point in this whole process. If we are going to teach robots by, one way or another, crowdsourcing knowledge that they are able to get off of the web, it certainly raises concerns about what it is they are able to learn, who they are learning it from, and what sort of parameters are being given for carrying out those tasks.

Michael: If you are a lawyer thinking about robots, practically the first thing you think about is the guy that buys a general purpose robot and tells it, well, go cut my lawn. While he is at work the neighbor starts teaching the robot stupid robot tricks, and that is nice and funny until it tries one on the mailman and breaks his leg. Then figure out the liability for that. You've got the neighbor, you've got the owner who left it unattended, you've got the people who built the robot, you've got the people who sold the robot, and you've got the people who've done the programming. It's what we call a target rich environment.

Fr. Robert: The interesting thing about this is that I don't really want the new generation of robots to be learning from YouTube. All they will be doing is dumping buckets of water on themselves and kicking people in the nuts. But seriously, Victor, if you show that link that I've got, the university that's making this describes the learning process as the interpretation of the natural language text and the images in videos. This technology has been around for a while; it's kind of hit or miss. If you scroll down, they try to describe the clues, the little bits and pieces that they are picking up from the images that you see. For example, they can figure out the spatial distribution for working activities. In other words, what does something look like when it's working? What does a person look like when he or she is working? It can figure out, oh, that's the part of a microwave that people interact with, and therefore that's a point that I should interact with, a point that I should manipulate. It can figure out things like, well, when there is a TV here, and there is a human here, and the human is at this angle, it means that the human is watching TV. That's all well and good, but the problem with learning from this type of activity is that not every microwave, every TV, every human, every device that it might interact with works the same way, so that kind of learning has a natural stumbling block. The first time that a robot comes to a microwave that it hasn't seen, or a microwave that works differently from a similar microwave, it won't know what to do. I think that is the limit of this learning. Is it learning? Not really. It's learned mimicry, and unless it's actually seen that exact situation it won't know what it is supposed to do.
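For concreteness, here is a toy sketch of the kind of "robot-friendly" representation being described, with observed affordances and spatial relations stored as queryable triples. The names and relations are assumptions for illustration, not Robo Brain's actual schema, and the empty result at the end is the stumbling block Padre describes:

```python
# Toy knowledge store: facts mined from videos, stored as
# (subject, relation) -> set of objects, which a robot can query.
from collections import defaultdict

class RobotKnowledge:
    def __init__(self):
        self.triples = defaultdict(set)

    def learn(self, subject, relation, obj):
        self.triples[(subject, relation)].add(obj)

    def query(self, subject, relation):
        return self.triples.get((subject, relation), set())

kb = RobotKnowledge()
# Facts a video-mining pipeline might plausibly emit:
kb.learn("microwave", "interaction_point", "door_handle")
kb.learn("microwave", "interaction_point", "keypad")
kb.learn("human_facing_tv", "likely_activity", "watching_tv")

# Mimicry works for things the pipeline has seen...
print(kb.query("microwave", "interaction_point"))     # {'door_handle', 'keypad'}
# ...but a novel appliance has no entry, so there is nothing to act on.
print(kb.query("toaster_oven", "interaction_point"))  # set()
```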

Michael: The microwave example doesn't really worry me, because the microwaves are going to eventually have RFID chips or something that talks, and everything else. They will tell the robot what model they are and there will be instructions built in. Or you can phone for instructions, which is probably what will really happen. It comes upon a new microwave, figures out what model it is, and then does what we do: goes and gets the instruction manual online, a purely roboticized instruction manual. So that's not the exciting part. I think, again, lawyers are always trained to think about what can go wrong. So think about what the majority of the videos on YouTube, you know, online, are about. Then try to imagine what happens if you don't have careful quality control over which videos the robot takes as input; I think that pizza delivery men are in real trouble.

Evan: I think a lot of this conversation is based on the assumption that the knowledge of the robots, so to speak, and I guess I should put that term in air quotes, the "knowledge", is somewhat static. This is all pre, I don't know if this is the same thing as the singularity, but it's all pre that point in time where robots possess artificial general intelligence, the ability to rationalize, and think, and learn with the same kind of sophistication that a human does to solve problems. We could see this going a couple of different ways; I can think of three. One is robots encounter doorknobs they've never seen before, and they don't know what to do, and they give up, but that's still pre-AGI, because a human can figure out how to open a doorknob when it hasn't seen the shape before. The second could be that once you get past the point of artificial general intelligence, and I'm sort of being influenced by the works of James Barrat now, I'm reading his book Our Final Invention, so a lot of these things sort of have the availability bias, if you know what I mean, the robot is going to be improving itself and getting smarter, so problem solving in a short period of time will sort of be a non-issue because it's so easy, and the robot will have so much capacity to solve problems on the fly, more than we could even envision. But what is even more likely to happen, the third option, is that if these robots are networked you won't even have to go through those quadrillions of cycles in the local environment; the robot will just consult with the network, or a robot somewhere else that has already figured out how to open up this door handle. That is actually how it is going to be. That sort of just explodes these questions that you are talking about, professor. When we start thinking about where to assign liability, in target-rich environments you've got innumerable opportunities to assign the causation for, for example, the tort with the broken leg.
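A minimal sketch of that third option, a robot checking a shared skill network before resorting to local trial and error; the registry, function names, and skill labels are all invented for illustration, and a real system would be a networked service rather than an in-process dictionary:

```python
# Option 3 sketched: consult a shared skill registry before learning locally.
SHARED_SKILLS = {}  # skill name -> procedure, contributed by other robots

def contribute(skill_name, procedure):
    SHARED_SKILLS[skill_name] = procedure

def perform(skill_name, fallback_learner):
    proc = SHARED_SKILLS.get(skill_name)
    if proc is not None:
        return proc()                         # reuse another robot's solution
    result = fallback_learner()               # slow local trial and error
    contribute(skill_name, fallback_learner)  # share it for the next robot
    return result

# Robot A solves an unfamiliar door handle the hard way and shares it...
perform("open_lever_handle", lambda: "learned by local exploration")
# ...so Robot B gets the answer instantly from the network.
print(perform("open_lever_handle", lambda: "should not be needed"))
```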

Michael: I'm totally with you on option 3 as the likely option. The likely winners in the short-to-medium term are the Watson-like robots. They don't have consciousness, it's not the singularity; they are taking a huge amount of data and finding correlations, finding solutions. But what I don't see is why the data from which they draw is likely to put any liability risk on the people that made that data available, as long as they didn't do something malicious. I will never forget when one of my kids was in high school I saw him editing Wikipedia. I said, oh, what are you doing? I thought it was great, he's public-spirited, he's helping out. Well, you know, we've got this game at school where we put fun stuff into Wikipedia just to see who will believe it. Now that's malicious. So if people are putting up videos to trick robots into doing bad things, they are going to get liability. But if you put up a video for innocuous purposes and the robot misuses it, I don't think that anyone is going to suggest that you are negligent. You don't have a duty to make your videos robot-proof, and I don't see one coming, either. I don't think that the distant third parties are going to pick up any liability just because the robot listened to them. It's very different from the neighbor.

Fr. Robert: I see one thing. Like, for example, if we were going to allow robots to be programmed by what is popular, like on YouTube, the things that most people watch would have the most amount of authority. For example, if I told my robot, "Go make me a hamburger," and now it's searching through its database, looking for the most popular YouTube videos for instructions on how to make a hamburger, it's probably going to find an Epic Meal Time episode where all they do is make ridiculously dangerous and horrible things to eat, filled with Jack Daniels and 10 billion calories. It's going to see that, and it's going to go, oh, okay, that's the proper way to make a hamburger, and I know that because that video has 80 million views. I like this question of liability. There's no maliciousness, but yeah.

Michael: Apple robots are going to be popular because they are going to be inside a walled garden.

Fr. Robert: They are curated.

Michael: They are curated, right. So with curation we are right back to the robotics that we've had for years now. Curation vs the wild west. Maybe some robots will draw from curation, and some won't. Maybe the running of an uncurated robot, a robot that draws from an uncurated database, might be a higher risk activity.

Evan: Don't you think that it is naive, though, to think that you could have a robot that is powerful enough to go watch a bunch of "How to Make a Burger" videos that somehow hasn't developed a sensibility about what is good in terms of its conduct? Let's hope that that has been programmed in there; again, I'm being influenced by James Barrat's thoughts on this. Has there been friendly AI programmed into it? Because if it's sophisticated enough to go out there and make a burger, then it should know the consequences and the valence of choosing whether to put Jack Daniels vs just French's Mustard on the thing.

Fr. Robert: Is that even a question? No, it's always Jack Daniels, right?

Michael: I hope it just knows not to put nails in there.

Denise: I just hope that somebody writes the rule that the YouTube channel for Will It Blend is not in the database.

Fr. Robert: Go clean my iPhone robot. Oh, no, what have you done?

Michael: It's clean.

Denise: That's got to be in there somewhere. Alright, so they are learning either from videos or otherwise. There have been a few notable developments in artificial intelligence over the summer here, so I thought I would just toss them out and see if we want to talk about them. There is this study at the University of Reading, if I am saying that right, in the UK, of a chatbot that passed the Turing Test. It seems like we frequently hear about chatbots passing the Turing Test. This one was named Eugene Goostman and was mimicking a 13 year old boy so effectively that it fooled 10 out of 30 judges at the event, all of whom were scientists. So I have to go check out how Eugene portrayed itself as a 13 year old boy, see what little quirks were thrown in, and see if it fools me. In any event, that has been in the news. Also in the news is how the Turing Test is not necessarily what we should be looking at as the be-all end-all for robotic intelligence. So maybe we can talk about that. What do you think, Michael, about this chatbot and the Turing Test in general?

Michael: I'm trying hard not to make jokes about 13 year old boys passing Turing Tests. What would the control be? That would be an interesting problem. When I hear stories like this, what I immediately think about is the work of Ryan Calo and people like that, who have been warning us, with I think a lot of justice, that one of the issues with robots is that to an extent people are going to trust them too much. We may be hard wired, or at least partly wired, to trust things that look like people. Except that we are going to have robots that more and more are going to look like people or are going to simulate a person online. There is a real risk that once this mimicry technology gets good enough, firms are going to use it in ways that probably aren't helpful. That can be anything from trying to sell you something to getting information out of you, which would be a privacy problem. I've read a really interesting paper on police interrogation which also touches on this: a machine doing interrogation at the border might be vastly improved by putting a human face on there, so that people may think they are talking to a person, and people are more likely to open up in ways that might affect their admissibility. Somebody in the IRC said, "Trust me, I'm a human." But in fact that is how our lizard brains work. Those are the stakes here.

Denise: You've written a paper on the three laws of robotics. Are we able to put enough checks into the programming that this isn't an issue? That it's okay for us to start to develop some trust?

Michael: Well, the paper is about the extent to which the legal system mimics the three laws of robotics, not the extent to which you can put them into robots. So I don't know the answer to that question. If you are somebody building a product that you are taking to market, you are obviously going to put a lot of safeguards in there so that it doesn't hurt people, because if it does, not only are you going to get sued, but it's really bad for business.

Evan: And it will probably already have been regulated in California anyway, so it will have been added to the cost of development. As a nod to another thing that we were talking about.

Denise: Okay, another interesting development over the summer. You may have seen the videos flying around, and this was from Harvard, of the swarming, tiny, autonomous robots called Kilobots: 1,024 of them working together, each individually aware of where it is in space, and working together as a collective to make these cool shapes. We've got a video of them playing right now if you are watching us live. So this is obviously an interesting approach to robotics and their intelligence. Evan, you are reading James Barrat's book. Does the thought of swarming autonomous robots give you pause?

Evan: Well, it does because of the sophistication, but I think that there is some real optimism that one could have about it, too. Isn't it fascinating to see how a lot of these systems mimic nature? I'm not enough of a mathematician to compare this to some of the principles underlying fractals, but I just think that it is fascinating to see the development of, in some instances, organic looking shapes. You see these systems take on forms that we recognize from just living in the world. It's probably the same thing going on that you see when you watch birds swarm, or a school of fish, and how they can take on this sort of shape and operate as an entire system. So it's fascinating to look at the fundamental, essential principles that seem to come to life when you have these individual agents acting together to manifest sort of a unitary intelligence. It's baffling really.

Fr. Robert: I just finished reading a book by Daniel Suarez called Kill Decision. The entire book is about swarm logic, and it got me into researching swarm logic. It is one of the most useful pieces of programming that you can learn if you are going to be getting into robotics: the idea of building in the ability to accomplish great tasks with tiny little units, which again is the basis of the insect world. It's the basis of an ant colony. It's the basis of any lower animal colony. It's really groundbreaking, but it's also kind of scary. I'm not going to lie here, swarms freak me out. The ability to have a lot of these low cost devices with the singular programming to accomplish a set goal kind of scares me.
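A minimal sketch of what "swarm logic" means in code terms: every unit runs the same local rule, nobody is in charge, and the goal-seeking survives losing most of the swarm. The specific rule below is a generic toy, an assumption for illustration, not anything from Kill Decision or the Kilobot work:

```python
# Toy swarm: every agent runs the same local rule (step toward the goal,
# keep a minimum distance from neighbors). No agent is in charge, so
# removing agents never stops the swarm -- the property described above.
import math, random

GOAL = (50.0, 50.0)
MIN_SEP = 2.0

def step(agents):
    new_positions = []
    for (x, y) in agents:
        # Local rule 1: a unit step toward the shared goal.
        dx, dy = GOAL[0] - x, GOAL[1] - y
        dist = math.hypot(dx, dy) or 1.0
        x, y = x + dx / dist, y + dy / dist
        # Local rule 2: back off from any neighbor that is too close.
        for (ox, oy) in agents:
            d = math.hypot(ox - x, oy - y)
            if 0 < d < MIN_SEP:
                x, y = x - (ox - x) / d, y - (oy - y) / d
        new_positions.append((x, y))
    return new_positions

swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(100)]
for _ in range(60):
    swarm = step(swarm)
    # "Destroy a large chunk of the swarm": survivors still converge.
    swarm = swarm[: max(10, len(swarm) - 2)]

cx = sum(x for x, _ in swarm) / len(swarm)
cy = sum(y for _, y in swarm) / len(swarm)
print(f"{len(swarm)} survivors, centered near ({cx:.1f}, {cy:.1f})")
```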

Denise: Yeah, I can see that.

Evan: It's easy to find a sinister outcome. It's not hard to see where it could go bad with swarming taking over. There could also be a lot of good to it. I always envision a future where there are trillions or quadrillions of nanobots. Is that the second time I've mentioned quadrillions on the show? There ought to be a prize for that. But where there are all of these nanobots in the environment, and there is some sort of network that is occasioned by that, they could have a lot of positive uses. But, then again, they could have a lot of negative uses, too. Especially if it gets across the blood-brain barrier and can infiltrate people's minds. So I can see where you are going, Padre, with that. It's daunting.

Fr. Robert: On the security research side, we have been trying to integrate swarm logic into some of these bot networks that we've been playing with. The whole idea is that one of the powers of a swarm is that there is no central control. You can destroy a large chunk of the swarm and it will continue to progress toward its goal. So the idea was that we needed to be able to defend against a bot network that had no set command and control node. Well, swarm logic is perfect for that, because you can destroy most of it and it will still keep attacking. We've actually seen some of these attacks in the wild. The swarm logic is being used; it just hasn't made its way into a physical shape. But it's not a far jump to do that. Again, I'm getting all Terminator: Rise of the Machines, but I get kind of freaked out.

Denise: Obviously, as Michael pointed out, it's easy to see the downsides of this kind of technology. Unfortunately law tends to be reactionary, so it would take something going terribly wrong before lawmakers pay attention, and at that point they might over regulate and quell innovation. That just seems to be the pattern that we are used to seeing followed.

Michael: That's unfair. That's unfair. I'm going to try and defend it. People get enthusiasms, and sometimes they make rules before the technology is even deployed. If you look at digital signatures, right? Those statutes were a solution looking for a problem. So it really depends. Yes, when you have a really high profile disaster, that does produce an often excessive response, but that's not the only way it works. Look at the number of people thinking about robots and trying to design rules; that's why we started the We Robot Conference, to try and get ahead of the curve. To do a better job than we did with the internet, right? With the internet, the DNS rose up based on particular technical choices which were not inevitable; they were convenient. And then we wake up one day and we realize that we built a system that does not mesh with trademark law at all, and there is all of this fighting that happens for 15 years. The domain name wars could have been completely avoided by a different architecture. The idea was, let's try to figure out what those problems are for robots and either design around them or change the law so those problems don't happen. It's a higher stakes thing with robots, because it's a lot easier for them to reach out and touch you. So we get people's attention that way. But I don't think that it's inevitable that the rules are going to be designed badly after something goes wrong. You can get ahead of the curve if you work hard to do it.

Fr. Robert: Professor, what are the legal challenges that you would set in the way of developers of robotics in order to get them thinking about the law? I'm more of an engineer; I'm more on the science side, the technology side. I just like to build. I never consider the legal ramifications; that's for somebody else. I love the We Robot conference because, yeah, maybe it's a good idea for the people who build things to actually get a 10 year time frame of: if your creation grows into this, then you have to consider that. What are the things that, say, a young robotics creator needs to know as he or she tries to build off of, say, swarm logic?

Michael: Well, actually it depends on the modality and the kind of robot, because if you are doing an ocean based robot, it's completely different than a land based robot. A swarm robot is going to be completely different than a humanoid robot. Where is it going to be deployed? The home, the office, the hospital, the battlefield? Very different sets of rules there once again. So I can't give you a general answer. But I can tell you this. In setting up We Robot we had very substantial participation from engineers and roboticists at every level of the conference: program committee, attendees, everything. That's also the hardest part, because for so many people like you who like to build stuff, when they see a lawyer coming they think one of two things. They think either root canal or they think that my pocketbook is about to be picked. They absolutely don't want to think about the legal stuff. Trying to convince people that it's in their own interest to become rules engineers for their part, to ensure that the rules get engineered for their benefit or that they can engineer around the rules in the design phase, is really hard, because people design stuff, and then it's only when you get to build and deploy that the engineers really start realizing that it's a problem. By then it's too late; you have a lot of sunk costs. So the challenge is to think about these things at the design stage. It's hard to get people interested because they don't see any money in it.

Denise: Are there resources, Professor Froomkin, for people who are at the design stage, other than retaining someone as knowledgeable as you to consult with them every step of the way? If, as you said, they are running away from lawyers and trying to discover this kind of information online, where should they go?

Michael: Boy, that, you know anybody who tries to be their own lawyer has a fool for a client.

Denise: Absolutely. Even though it comes up a lot.

Michael: This is not a self-help kind of situation. It's just not a self-help situation.

Denise: Right. Well, clearly Google has lots and lots of lawyers, both in house and outside of the Googleplex who are helping them consider the legal implications of their self-driving car which they are plugging away at. Most recently here in California they hit a couple of road bumps, no pun intended, that may or may not make sense. I'm interested to get all of your opinions on why Google cars need to have a steering wheel according to the state of California. Padre, is this a reasonable thing for them to do as they are simply testing the cars now?

Fr. Robert: It's reasonable because we are humans and we think that we need the ability to override the computers. Let me just start by saying that's what I would want. If I had an autonomous vehicle I would like the ability to take control back from the robot when I know that the robot is going to try to kill me. Not maliciously, but because it's going to do something stupid. However, if you look at the numbers, if you look at automation vs human driving, or human piloting, whatever it's going to be, we pretty much know that putting driving in the hands of a relatively informed autonomous system is going to be far safer than any human driving at any given time. That's just how it works out. If I'm a human driving a car, then I know what I'm doing, I know what I'm doing alone, and I only know what I can sense. If I'm in an autonomous vehicle, like one of Google's vehicles, which can figure out what other vehicles around it are going to do, it's always safer. And yet, there is always that human element that says, I need to be able to take over, because somewhere some programmer wrote a line of code that's going to interact very poorly with my environment, and for some reason my car will want to crash into a wall. I'd love to hear from the professor about this, because this is a fascinating point of law when adding a steering wheel may increase the possibility of a fatal accident.

Michael: First of all, right now they are just requiring it in the experimental cars. They haven't made a ruling on production. My understanding is that the ruling on experimental cars is just a straightforward application of the rules that exist in California, and there isn't an exception for cars without steering wheels. So if you are going to put a car on the road, then it has to have a steering wheel, because that's what the law says it's got to have. I have sympathy for the California officials here who are just applying the rules that they have got to work with. Now in the long run, how much of an analog override should there be for people? I can't give you a good answer to that question until you can tell me a lot of stuff that we don't know yet about how the programming has actually been done for the car. Jason Millar, who just got his PhD up in Canada, has done some really interesting work; in fact he presented at the most recent We Robot conference on this, on the extent to which ethical choices get hard coded into things like autonomous cars. There are going to be some situations where the car gets itself into a position where someone has got to get hurt. The question is: is it the passenger, is it the driver, is it the bystander, or is it the dog? What is it? If we hard code those choices in, then you just take the steering wheel out and go with what some engineer decided in the abstract, and I don't know what that is. The other thing I need to know is what the liability rule is going to be. If you take my steering wheel away, is that going to absolve me of anything that the car does? That seems like a fair trade. If you leave the steering wheel in there, maybe I'm going to have more liability. If I'm Google selling the car, that's the result I want; I don't want to be the insurer for every car on the road. That's a lot of liability to carry. So if I'm the driver, and I have some liability, then I should have some way to control the liability. But of course that also means that I have to pay attention. It also means you can't just put the kids in the car and have them go to school, because you've got to have somebody with a license in the car. So there are a lot of social choices that you have to make. I am also confident that in the long run the autonomous cars will be safer than drivers. I will even go so far as to say that they will be safer than most Miami drivers today, because Miami is crazy. But in a mixed environment, where all of the cars are not autonomous, where you have a mix of autonomous cars, and legacy cars, and people jaywalking, and all of the rest of it, I don't know how confident I am about the systems. From what I understand about the testing that has been done, it's still pretty far from the conditions that truly deployed autonomous cars would be under. There's a Google engineer riding along. There is a lot of stuff happening. There are not a lot of them on the road; they are not interacting with each other. In the long run, yes. Getting there can have all kinds of interesting hiccups.

Denise: Evan, you were commenting on California's heavily regulatory environment for this kind of thing. Are you surprised that California lawmakers said they were actually working on rules for cars without a steering wheel and pedals?

Evan: No, it's no surprise that the regulators are giving attention to that. I'll sort of tack on to what the professor is saying there about the nuance of what is particularly going on here. It doesn't sound like there is a bunch of new legislation going on on the particular question of whether to have a steering wheel. Clearly, it is way too early for us to be developing too many rules on this because of the relatively nascent technology and all of the other factors out there, namely legacy cars. I hadn't actually heard that term, but it makes sense. The traditional, human guided, human directed, fossil fuel burning cars.

Michael: I made it up on the spot. You are welcome to use it.

Evan: It's good. It's good. The question of the steering wheel is just the first of several questions of its ilk. I think that it gets freakier as we get down the road of thinking about these things. It's suggested in the article that we were reading together before the show today: what about headlights? It's one thing to have a certain sense of unease riding in a car without a steering wheel, but who is going to be the first to travel down the freeway at 85 mph at night with no lights shining the way? Presumably those wouldn't be needed. I guess it's just going to depend on the nature of the sensors, but likely not; it's probably going to be infrared, lasers, or whatever else, not input of data that depends on the visible light spectrum. So that will be weird, to ride in darkness. The other thing: the windows themselves. So there is a long way to go on this, but it really is amazing, if we look at it, how much progress Google in particular has made in the last few years with these things. I don't think 10 years ago there was much to say at all, was there, about the reality of self-driving cars. Maybe even as recently as 5 years ago. I've sort of lost track of the historical perspective, but things do appear to be happening fast in this area. So it's exciting to see where this will go in the next few years alone.

Fr. Robert: Well, an interesting note on that, which is that GM, Ford, and Nissan were all working on autonomous vehicles as far back as 8 years ago. In fact, we saw a GM prototype about 5 or 6 years ago at CES. All of them had the same capabilities, which were that they had some sort of sensor array, Lidar, or IR, or sonar. They had the ability to keep their own spatial station-keeping, or they had the capability to speak to other similar vehicles to have the ability to tuck and roll: imagine 4 of those vehicles all traveling very close together at the same speed, traveling as a single unit. GM actually created something that could do that. I rode in it. Every single one of those projects died for one reason. The reason was that they didn't think the product would ever make it to market, because they never figured out a way that the law would work. How would liability be handled with an autonomous vehicle? Ultimately, that's what killed them. They all said the same thing, and that was that if you had a city full of these vehicles, if you had nothing on the road except these, no mixed environment, it would be close to 0% accidents. If you had a system that was so in tune, talk about swarming logic, that all of the vehicles in the city were controlled at the same time so that there would never be something that is out of parameters, it would be the safest system ever. But there is no way to place liability. Who gets sued if something goes wrong? If someone uses one of the vehicles in the wrong way? A passenger overloads something so that the system reacts? And again, I think this goes to the professor. This is something where I think law needs to step in to tell the manufacturers: as long as you do this, you're okay. Because Google's making big strides, but does anyone have any hope for this hitting the light of day, as far as the retail market's concerned, until Google knows that they're not going to get sued the first time someone does something stupid with a self-driving car?
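As a toy illustration of the station-keeping Padre describes, here is a minimal constant-gap platoon controller: each follower uses only local sensing of the car ahead. The gains and figures are invented, and production platooning stacks (and the liability questions around them) are far more involved:

```python
# Toy platoon: each following car adjusts its speed to hold a fixed gap to
# the car ahead, using a proportional term on the gap error plus a
# speed-matching term for damping. All constants are made-up figures.
DESIRED_GAP = 8.0   # meters
KP, KV = 0.5, 1.0   # controller gains
DT = 0.1            # timestep, seconds

positions = [100.0, 90.0, 80.0, 70.0]   # lead car first
speeds = [25.0, 20.0, 20.0, 20.0]       # lead car cruises at 25 m/s

for _ in range(600):                    # simulate 60 seconds
    for i in range(1, len(positions)):
        gap_error = (positions[i - 1] - positions[i]) - DESIRED_GAP
        speed_error = speeds[i - 1] - speeds[i]
        speeds[i] += (KP * gap_error + KV * speed_error) * DT
    for i in range(len(positions)):
        positions[i] += speeds[i] * DT

gaps = [positions[i - 1] - positions[i] for i in range(1, len(positions))]
print([f"{g:.1f}" for g in gaps])       # each gap settles near 8.0 m
```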

Michael: Well, I mean, people — in the tech circles, people always want the law to make it as easy as possible to deploy the new technologies because either we have shares in the companies, we're running the companies, or we just love the "gee whiz" factor. But people who do social policy really have to think about the bystanders who are going to get mowed down by these things. And it's not obvious that just because you've got a "gee whiz" car, you ought to have any less liability than somebody who's got a regular car. So there may be an issue as to where the liability should lie. Maybe it's Google's, maybe it's the car owner's; and we can have a debate about that, and we should. But a lot of it depends on how much control a car owner has and what promises and representations Google's making, right? The more the logic is taking power away from the driver, the less sense it makes to give the liability to the driver, and the more sense it makes to give the liability to whoever designed the logic. If that's going to delay deployment of the cars, we really ought to be grown-up enough to say, good.

Fr. Robert: Hmm.

Michael: We really ought to be grown-up enough to say, good, because the idea that the liability should just disappear is actually dangerous. Cars are really dangerous things.

Denise: Right.

Michael: Now, if we have wonderful centralized control, we don't allow legacy cars on the road, pedestrians are only allowed to cross on the crosswalk and all the rest of these things — or maybe they've got to carry a phone or wear a sensor to tell the car they're coming; and if they don't we're going to say they assume the risk of being mowed down by something going 60 miles an hour in a residential neighborhood. We can have that rule if a democratic legislature wants to give it to us, but I don't think that's going to happen in my lifetime, and I'm really not sure it should.

Fr. Robert: Well, because — I mean, if liability doesn't at least get severely mitigated, that programmer in a lab will never want to program anything. If I know that I may somehow miss a potential hazard that the car will run into — which is an absolute certainty because the potential problems are infinite — then I'm going to say, Well, it's not worth it. It doesn't matter what I'm getting paid.

Michael: Well —

Fr. Robert: The future liability is just not worth it.

Michael: Well, okay. Then you have to give control to the user. If the programmer's going to have all the control — the car's going to have no steering wheel, no brake, no kill switch, no nothing, and we're going to just put people in there who don't have driver's licenses — then you absolutely can't have the passenger bear the liability because they don't have any control. That's not fair. But liability has a number of purposes. One is to compensate people who are hurt; another is as an incentive for people to behave well, right? So the people who have the decision-making power — the so-called "least cost avoider" — have to be the ones who have skin in the game, or they have no incentive to be careful. And if it turns out they can't be careful enough, they shouldn't do it, right? So this is almost the free-market story behind nuclear reactors, right? If the people who built reactors were really subject to the liability they could potentially cause, they would never build them. It's why we have the Price-Anderson Act; [unintelligible] liability.

Denise: Yep.

Michael: I mean, all the nuclear reactors in this country are not built on free-market principles.

Denise: Michael, is there some parallel in product liability law to a DMCA safe harbor? Are there steps that manufacturers can take so that — if they've jumped through certain safety and compliance hoops and something has still managed to go wrong — they have some degree of protection?

Michael: Well, funnily enough, we had a piece about this at We Robot. So it's a complicated answer, I think, because product liability is divided into different pieces. One's design; another is production; and there are other bits as well. So basically, for the production part, if you have an error on the production line, you're going to be liable for it, right, if you just didn't make it right.

Fr. Robert: Right, right.

Michael: But [unintelligible]. But basically, you've got liability for that, and you make a business decision as to how much product control you want to do and how much liability you want to bear. Design is a little more complicated; it's actually the subject of change at the moment. The latest set of proposed rules is something called the Third Restatement of Products Liability, which makes it a little tougher for plaintiffs to claim that something wasn't designed right: they're going to have to show that an alternate design was possible at the time. Now, that's not the same as a safe harbor, right? There isn't some way you can, like, register your design and say, now you're protected; you're safe; everything's okay. But if you're careful, you reduce your liability.

Fr. Robert: Yeah. I guess that's all we can ask for. We can ask for reduced liability to encourage the development of these vehicles. But you're right; at some point, there has to be some skin in the game. And if you can release a product into the wild and say, Okay, well, I followed all the rules; therefore, even if you find some horrific flaw in the design, you can't come at me. That's not an option, either.

Michael: That's why — there's another piece in the things that were on the list of developments this week: the case for considering robots as people.

Denise: Mm-hmm.

Michael: That article made me mad. That article really made me mad.

Denise: (Laughs)

Michael: Because it's completely ignoring the moral hazard. The moral hazard is, people build this stuff, and they don't have an incentive to make it behave well. So if you're going to put the liability on the robot rather than on the designer, the user, or somebody, that's a license to go out and build and deploy things that hurt people. And for somebody who already thinks corporations allow people to escape from too much liability as it is, the idea that corporations are people in every sense is not a good idea. Why would you want robots to be even further down that road than corporations? I mean, this is a line of argument that says, We're not even going to just incorporate the robot; we're going to treat it as a person, so nobody else is responsible for whatever damage it does. I don't see the argument for that. I truly don't even see the appeal of it. Why would we want to live in a world like that?

Fr. Robert: I was —

Denise: So this is an article at Quartz. It's called "The Case for Considering Robots People," if you want to read through it and try to wrap your mind around it yourself. And everything we're discussing today you can find at delicious.com/thisweekinlaw/273, which is our public topic list. So this has a couple of permutations: the liability side; and also, I would think, the rights side, right, Evan? As things begin to think more for themselves, where do you draw the line?

Evan: Right.

Denise: If a robot is capable of being hurt and is capable of having emotional trauma. So —

Evan: Yeah. I mean, there's the near-term thinking about this, and then there's the farther-flung thinking about this, which is much more interesting; because when we think about the near term, we think of, like, corporations and insurance and stuff like that — relatively dull stuff. And that's sort of what I think is the rub here with this piece on Quartz about just treating robots as people, as if they're a corporation: the liability of whatever humans are involved with that robot, whatever humans whose ends it's serving, is somehow limited in that respect. And I sort of wonder if maybe some of the concept is being lost in translation in the article, because it does seem a little bit too simple a concept; and to the extent that it's oversimplified, it irritates Professor Froomkin.

Denise: (Laughs)

Evan: So there may be something going on there. But then there's, of course, the longer-term vision: once these entities become so sophisticated that they demonstrate intelligence — and this is the motif I bring up fairly often — the ability to demonstrate consciousness and the ability to feel and be sentient. And at that point, it seems like we, as humans — I think our system of law, when we think of rights, at least, and human rights and some of the loftier aspirations of what the legal system hopefully tends toward, is based in large part on a very old part of the brain that allows us to experience compassion and a sense of feeling. And if there is an agency — or an agent — out there: a system, a closed system, whatever — a robot, for lack of a better term — that has the ability to suffer, then the question of rights and liability becomes very different. Because if that agent does have the ability to suffer — and I'm sort of trying out a new thought here, after having spoken with Professor Froomkin on one of the bases of liability, getting people to behave — if that robot has the ability to suffer, and to suffer the consequences of having liability placed upon itself, then it would be a good idea to attribute liability to that conscious being so that it can conform its conduct in a way as to not face future liability. That is, if it's been programmed to actually respond to conditioning. A lot of what-ifs. But just to summarize, I think there's the near-term thinking of robots as people, which is much more like a Citizens United concept, versus robots as people as a much deeper, metaphysical — I would even submit to say spiritual — question that we need to ask ourselves.

Michael: So the "robots as sentient entities" talk often starts with animal rights, right? Because people who are focusing on the slightly less long term say we're going to get to a point where robots have the self-consciousness that animals have and ought not to be mistreated, for the same kinds of reasons animals ought not to be mistreated — reasons that go both to the effect this treatment has on the recipient of the punishment and to its effect on the giver, because it can be quite negative to be allowed to hurt things that look like people or like animals. It may be harmful for people to do that, if only by encouraging bad habits.

Evan: Yeah.

Michael: But when we go to the long term, that's not an environment I spend a lot of time thinking about; because while I really do think this singularity's going to happen, it's a long way off. Maybe we wake up tomorrow morning, and we're surprised. You know, our robot overlords send us a message.

Denise: (Laughs)

Michael: But until that happens, I'm sticking to my idea. Are we going to have robot children? Is there going to be a period after you turn it on before the robot is considered a full legal person? What are its rights going to be there?

Denise: Yeah.

Michael: I mean, if you want to take this metaphor, it takes you to all kinds of interesting places. But for me, that's something to think about after two Scotches.

Denise: (Laughs)

Fr. Robert: We will have robot children. And some of the robots will be underachievers, and they will disappoint you and insult you as they grow up.

Michael: Right.

Fr. Robert: And then others will grow to be fantastic robots that will build robot empires. I think the analogy's complete. I like what Evan said, that the liability problem can be solved — well, solved with this really simple thing of giving the machines consciousness. If you give them consciousness, if you give them the ability to actually consider complex moral issues, then they become liable; and we can wash our hands of all of that. So —

Denise: Right, right.

Fr. Robert: — if we could get right on that, that will make things easier.

Denise: Our robot prison population will be easier to maintain than the current human one. (Laughs)

Robert and Michael: (Laugh)

Denise: I'm going to go ahead and put an MCLE pass phrase into this interesting discussion hearkening back to our learning robots and things that are difficult for them to learn. Our first pass phrase for the show is going to be "difficult doorknobs." We put these phrases in the show in case you're listening for continuing legal or other professional education credit. We discuss some really interesting and, I think, complicated issues on the show, some new developments in the law, and some yet-to-be developed areas of the law. So if you're interested in going to whatever oversight body that you have for legal or other professional education credit, we've got some information about that on our wiki at wiki.twit.tv; find the This Week in Law show there, and it should help you out. We'll put one more of these in the show so you can demonstrate to whatever overlord wants to know that you actually listened or watched the show and got this wonderful content into your head.

Before we leave —

Michael: Can I get a robot to listen for the catchphrases?

Denise: Yeah. (Laughs)

Fr. Robert: (Laughs)

Denise: Yeah, we could definitely program one to just pull out the pass phrases, I'm sure. Some other interesting robot stories — this was tweeted my way from SleeplessinVA — Sleepless in Virginia, I'm assuming that means. It's an interesting story — and you get these kinds of analytical stories about judges; we were talking last week about trying to apply data analytics to the legal profession. This particular one talks about the best and worst times to have your case reviewed by a judge. It talks about human decision-making and how judges are obviously subject to all the human foibles that we all face, and how little things — whether you've had an adequate breakfast, how your day is going — can impact that decision-making and give someone a worse result at one time than another. So SleeplessinVA's question to me was, If judges were algorithms and justice were truly blind, would we still have these problems? So I'll toss that out as our little discussion kernel to Michael and Evan and Padre. And first, Michael: do you think that we're close, at least for some kind of rudimentary, simple dispute resolution, to being able to plug something into an algorithm and have a result come out; and would that kind of result perhaps be better than that of a judge who had not yet had her coffee?

Michael: So, like a broken record — funnily enough, we had a great paper about this at the latest We Robot. Somebody did a really good study about traffic laws and to what extent speeding enforcement might be roboticized. And of course, if you want to do that, you have to have an algorithm. And the first question is, what's the rule? They asked a bunch of different people to design rules, and it turns out to be really hard. I mean, are you going to say, if you go one mile above the speed limit, you get a ticket? Are you going to allow people to speed to pass? What about emergencies? There are tons and tons of issues. And these guys, who are mostly roboticists — [unintelligible] and the crew — were, I think, really shocked at just how difficult a task it would be, how far we are from having automated law enforcement, just with speeding, in a way that we would consider socially acceptable. So the answer is that it sounds great, but it's so hard.

Denise: (Laughs)

Michael: I mean, human beings are so complicated, and making decisions is so contextual. Indeed, we live in a common law system, right, which is based on the whole idea that context is really important. So you can have an algorithm-based system as long as you're willing to have really bright-line, cruel rules. When I was in law school, one of the catchphrases at [unintelligible] law was, "Hell is a place where the rules of civil procedure are strictly enforced."

Evan: (Laughs)

Fr. Robert: Yeah. Well, I mean, when you look at law, we look for judgment; and judgment, as the professor has said, is incredibly complicated because it's not just a single thing. A police officer looking to make a traffic stop is going to look at — okay, velocity; going to look at, how is the driver driving? Does the driver look impaired? Is the driver driving on a wet surface? Is the driver driving hazardously? Is he surrounded by other people who may be hurt by his driving? So that's judgment. Algorithms don't give us judgment; algorithms give us thresholds. Algorithms trigger when certain conditions are met. So that's complicated. Now, to take the other side: if you were to look at public sentiment about accepting algorithmic justice, this part's pretty clear. There's a whole population of society that believes they don't get justice, and they would probably embrace an algorithmic system. There's another part of society that maybe even subconsciously understands that they have a leg up in the way justice works in the current system, that it's easier for them to manipulate; they would probably not like this system. I would love to see a test case — of course, not replacing what we're doing right now, but just developing a system that would look blindly at cases going through the system to see which of its decisions would not coincide with the decisions that a judge handed down. I mean, I would love to do that.
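
A minimal sketch of that thresholds-versus-judgment point: each piece of contextual judgment an officer applies becomes another hand-written condition, and the carve-outs multiply quickly. All of the thresholds and parameters below are invented for illustration; they are not drawn from the We Robot paper or from any jurisdiction's rules.

```python
# "Algorithms give us thresholds": every factor an officer weighs becomes
# another hard-coded branch. All numbers here are made-up illustrations.

def should_ticket(speed_mph, limit_mph, *, passing=False, emergency=False,
                  wet_road=False, school_zone=False):
    if emergency:
        return False                      # carve-out 1: genuine emergencies
    margin = 1 if school_zone else 5      # carve-out 2: tighter grace in school zones
    if wet_road:
        margin = 0                        # carve-out 3: no grace on wet roads
    if passing and speed_mph - limit_mph <= 10 and not school_zone:
        return False                      # carve-out 4: brief burst to pass
    return speed_mph > limit_mph + margin

# Each new scenario (funeral processions? merging? faulty speedometer?) means
# another parameter and another branch; that combinatorial growth is the
# difficulty the study reportedly ran into.
print(should_ticket(68, 65))                    # False: within the grace margin
print(should_ticket(68, 65, wet_road=True))     # True: no grace when wet
print(should_ticket(73, 65, passing=True))      # False: passing carve-out
print(should_ticket(73, 65, school_zone=True))  # True: school-zone margin exceeded
```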

Michael: You would have to code that. It's really tough for the —

Evan: Yeah.

Fr. Robert: You would have to code it; it would be incredibly difficult.

Michael: I'll tell you what I would want to do: the tax laws.

Fr. Robert: (Laughs) Oh my gosh, I would not want to go through the tax code to code all that.

Michael: But actually, it's more detailed than a lot of other things; I think it gives you more to work with. So at least for things that weren't incredibly complex transactions where there were really hard characterization issues —

Fr. Robert: Okay. Yeah.

Michael: — I say there's —

Denise: Right.

Michael: Maybe we could make it work.

Fr. Robert: I could actually see that.

Denise: And it's draconian, to begin with. It's not like anybody thinks there's a whole lot of — yeah.

Michael: You just put the entire economy into a giant [unintelligible] matrix, right?

Denise: (Laughs)

Michael: We're close to that with E-commerce anyway, right?

Denise: Yeah.

Michael: We'll have better economic forecasting; we'll be able to better control the money supply; and by the way, you'll get a tax bill at the end of the year, and it'll be perfect.

Fr. Robert: Wait a minute. We already have algorithmic tax codes. It's called TurboTax.

Michael: (Skeptical noise)

Fr. Robert: (Laughs)

Michael: You put in the data yourself. You can make it say anything you want if you [unintelligible] the numbers.

Fr. Robert: That's true. Yeah, that's true. I get it.

Denise: All right. Well, one final robot-related story, this one about drones, having to do with Disney patenting several different drone technologies to use within its parks. And before you start thinking of swarms taking down crowds of Disney goers, that's not what's going on here.

Fr. Robert: (Laughs)

Denise: They actually have some really interesting-looking technologies involved here; it's amazing what people are doing with drones. They all seem to have a production and presentation focus. The first one is flexible projection screens that the drones move into place so they can put on their in-air nighttime kind of spectacles. Let's see ... the second one ...

Fr. Robert: Balloons.

Denise: Yeah, balloons. Really crazy stuff going on there with digital fireworks and other things. And then the final one is the balloon one: making super-large puppets move around.

Fr. Robert: Yes. (Laughs)

Denise: And seemingly walk. So with the drones controlling their limbs, you can imagine some sort of drone-y Macy's Thanksgiving Day Parade coming at you. (Laughs) From the legal standpoint, the interesting fact is that Disney is patenting this technology — and who knows if they'll move forward with it, but the imagineers have been busy. Are you looking forward to seeing drone-driven spectacles sometime soon, Padre?

Fr. Robert: No! (Laughs)

Denise: No!

Evan: (Laughs)

Fr. Robert: I loved — drone technology is fun. I flew a drone over Las Vegas, and I got some great footage. One of the things that we've been told — because it's still kind of unsettled; we don't really know what's allowed and what's not allowed — is, just don't fly it over a populated area, because these things do fail; they could fall out of the sky, and they could cause someone some serious bodily harm. So now Disney's going to fly a couple of dozen, maybe a few hundred, of these things over incredibly densely populated areas of their park? Cool technology, but all technology fails. And right now, the only failure outcome of a drone that has a problem is, it falls out of the sky. So am I excited to see it? Absolutely. I'm excited to see it in action. Would I be excited to see it while sitting underneath the display? No.

Denise: (Laughs)

Fr. Robert: (Laughs) No way.

Denise: Yeah. Good point.

Michael: Why is drones holding stuff up patentable? Why isn't it just obvious?

Fr. Robert: (Laughs) Actually, that's a very good question.

Denise: Excellent question. I'm assuming it's something more than drones holding stuff up. Maybe they're specially designed drones that are a novel concept that way. But it's —

Fr. Robert: So drones as entertainment. Is that the patent? So — yeah, actually, now that you've got me thinking about that, I'm kind of upset. (Laughs)

Michael: (Laughs)

Fr. Robert: This —

Denise: What do you think, Evan?

Evan: Well, I mean, a couple things. Yeah, it does raise some interesting questions about patentability. And I think we've just got to keep in mind here, there's always the tendency to try to read too much into these tea leaves. These are just patent applications. It may or may not be something that actually comes to fruition; I imagine Disney owns tons of patents — many, many patents. And just to comment on what you said, Padre, about the danger of all this — isn't that easy to overstate? Because we do all kinds of really crazy things. Just to hearken back to the self-driving cars: the cars we drive now weigh a couple of tons, and we barrel them down the interstate highway at 85 miles per hour, even through residential areas in urban settings, and we're okay with that. Just think of the airlines: a 747, hundreds of tons loaded with fuel, flying over densely populated areas. So maybe it would be easy to overstate the danger of that. Or at least — maybe we're really not overstating the danger, but we should recognize that we have the capacity to be numbed to the real danger of some of the crazy endeavors that we as humans undertake from time to time. You think?

Fr. Robert: I will acquiesce to Evan there; because, thinking about it, what goes on inside of a Disney park — between rides that are industrial machines that could kill you, and fireworks that happen every night, pyrotechnics that could fry you if they go wrong — I guess this isn't a huge increase in risk.

Denise: (Laughs) No.

Fr. Robert: By the way, that dragon? That's real fire. That's hot. (Laughs)

Denise and Evan: (Laugh)

Denise: Sleeping Beauty, I'm sure, thought so.

Let's move on to a privacy story involving the data on your cell phone.

(The intro plays.)

Denise: So there's an interesting story in the Washington Post this week about security — about systems that track where your cell phone goes, the location data in your cell phone. This is something that governments have been able to do for some time. I read this story, and I get really angry about the fact that governments can do this; and now there are going to be businesses that, for a high price, will sell you technology that can do this. But if you want to do it yourself — if you want some usable way of getting this data and putting it to good use for your family — we're still, I think, miles away from having good systems for that. But Padre, you went to Black Hat and DEFCon this year and, I think, have some thoughts about your cell phone and people getting your location data from it.

Fr. Robert: It is comically easy. I was actually sitting with Brian Chee, who's my cohost for This Week in Enterprise Tech; CyberDog, who's another member of the TWiT army; and one of the engineers I work with at Interop. We were just working on an encryption project that's part of a contest for DEFCon; and a street magician comes up and starts a conversation with one of the members of our party. It's one of those, okay, I'm going to pull a card out of here; you have to remember what you see on the card. And he gets it wrong, and all this bumbling, and so on and so forth. And then CyberDog's phone rings, and it's the card. The card has shown up on his phone.

Evan: Whoa.

Denise: (Laughs)

Fr. Robert: And it's because he was able, with a very small radio that he was carrying in his chest pocket, to get the IMEI of the device; he was able to make the phone think his radio was the antenna tower it should connect to; and then he was able to send it a message. It's that simple. I've got the hardware downstairs to do this. I wouldn't do it, because it's illegal, but —

Denise: (Laughs)

Fr. Robert: And if you have a larger system — I could track where every phone is at DEFCon. That is trivial. I could do it from my laptop without setting anything up, because all the architecture's already built into the convention center. Now, to go up to a city level, I'd need a few more resources, but not that many more. So all it takes is money, and you've got an off-the-shelf, plug-and-play system to track everyone who has a device with an IMEI. It's kind of scary, actually.

Denise: It's very scary. So aside from governments, and now commercial enterprises, that will sell you this kind of data, what are we going to do about it?

Fr. Robert: Well, you can — I mean, this data is out in the open. It's not classified. If you are a corporation that's looking to do location-based product placement, you can buy this data from phone companies. It's available for sale; it's not illegal to buy it. So that, again, just means it's just money. It used to be something we took for granted that a government could do; but now a corporation can do it, and you don't even have to be that big of a corporation to be able to do it. So it goes beyond kind of scary; this is a serious invasion of privacy. I used to turn off all my devices when I traveled; because I didn't need to use them, I just shut them off. But there's another part of this story, which is that the IMEI is only one of the ways you get tracked. You get tracked by red-light cameras; you get tracked by toll-takers on the highway. I get tracked every single time I drive to and from San Francisco because I go over a place that uses FasTrak, which is an RFID tag that's attached to my car. So I could be ticked off, but I think I'm kind of leaning towards, Eh, it's going to happen; I'm sure nothing bad's going to come of it.

Denise: Right. And this is one of those instances where someone might say, "Well, it's just the metadata that's getting picked up." But that is actually more nefarious than it sounds, isn't it?

Fr. Robert: Right. Well, think about this. Remember the first time we had a big release of metadata? It was when AOL did their big dump. They thought it was a cool project. This was back in, what, 2007?

Evan: '06.

Fr. Robert: Yes, 2006, 2007. They thought it was an awesome experiment for people to see what kinds of correlations they could make with a big chunk of metadata — nothing that actually identified the users. And what they found within a day was that people were able to take the metadata, reverse it, and figure out who it was attached to. So metadata isn't just headers. Metadata does identify people if you have a big enough set of information. So, for example, yeah, it may not even be my phone number; it may only be my IMEI, and no one may be able to correlate my IMEI with me. But watch that IMEI over a length of time, and you know I'm always traveling the same path; you know that I always seem to end up between the Brick House and the school that I live at in San Francisco; and then combine that with any metadata you get on searches I'm doing from my phone, and it's very easy to figure out who that IMEI belongs to. And the law has no way to catch up with that right now. We're not even thinking about that.
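
A toy sketch of the kind of correlation being described: even with no name or phone number attached, a device ID's most frequent nighttime and daytime locations tend to pin down home and work. The records, tower names, and the profile helper below are all fabricated for illustration.

```python
# Re-identifying an "anonymous" device ID from location metadata alone:
# the most common nighttime cell is usually home; the most common daytime
# cell is usually work. All records below are fabricated.
from collections import Counter

pings = [
    # (device_id, hour_of_day, cell_tower_id)
    ("imei_0001", 2,  "tower_petaluma_3"),
    ("imei_0001", 3,  "tower_petaluma_3"),
    ("imei_0001", 23, "tower_petaluma_3"),
    ("imei_0001", 10, "tower_sf_mission_7"),
    ("imei_0001", 14, "tower_sf_mission_7"),
    ("imei_0001", 15, "tower_sf_mission_7"),
]

def profile(device_id, records):
    """Guess likely home and work cells from hour-of-day patterns."""
    night = Counter(t for d, h, t in records if d == device_id and (h >= 22 or h <= 5))
    day = Counter(t for d, h, t in records if d == device_id and 9 <= h <= 17)
    return {
        "likely_home": night.most_common(1)[0][0] if night else None,
        "likely_work": day.most_common(1)[0][0] if day else None,
    }

print(profile("imei_0001", pings))
# {'likely_home': 'tower_petaluma_3', 'likely_work': 'tower_sf_mission_7'}
# Join "likely_home" against any address-linked dataset and the ID stops
# being anonymous: the same failure mode as the AOL search-data release.
```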

Denise: Well, Michael, what do you think about that? Are we thinking about it? I mean, clearly, lawmakers are very attuned, I think, to privacy issues and data protection issues these days. They know it's a big consumer concern. Do you think that they'll be able to chart a course that protects people's identities, protects them from being tracked, and enables our cellular network to continue to function?

Michael: [Unintelligible] of the Electronic Frontier Foundation and of EPIC, also. Because I think the answer to this is to join those groups as soon as you can.

Fr. Robert: (Laughs)

Denise: (Laughs)

Michael: No, it's —

Denise: Yeah.

Michael: We are at a — privacy is on life support right now.

Fr. Robert: Yeah.

Michael: I used to tell people, if you're really concerned about privacy, take the battery out of your phone when you're not using it, carry a burner phone, and you'll be okay. But now it turns out that it's trivial to correlate the use of the burner phone with when the other phone's on or off, and to watch its movements and correlate it with that. So one burner phone's not enough now. You need several; and you have to use them in random ways, make sure they're never on at the same time, and leave a big gap between turning one off and turning the next one on, or they're going to correlate that with your movements. So it's really, really tough. It's true that politicians are giving significant lip service to privacy. It's also the case that the NSA reform, at least, was watered down pretty seriously; and that's the one major piece of privacy legislation we had in Congress this year, and it's thin gruel. So I haven't given up on this, but I'm getting increasingly depressed. And people are absolutely right to be scared, because the government has it all; the corporations have a lot of it, and they're going to have more of it. And David Brin's book The Transparent Society looks smarter and smarter every year. He predicted that we're going to be transparent to people of power; the only question is, are they going to be transparent to us in exchange? And he argued — it seemed absolutely crazy at the time — that we ought to be for full transparency, because that's the only way the powerful will be transparent; those of us without power, he saw, had already lost. And I really am worrying more and more every year that he's right. I can tell you, it's not a future I like the look of at all. But right now, privacy is in the worst position it's ever been in; it gets worse by the nanosecond. Big-data correlations just get around every kind of protection you think you had. Anonymizing is a failure, both mathematically and practically. We are in a really, really, really, really bad place.
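
A back-of-the-envelope version of the burner-phone correlation Michael describes, under the assumption that an observer sees only coarse power-on and power-off times per device: score each candidate device by how often it comes alive shortly after the primary phone goes dark. All timestamps and device names are fabricated.

```python
# Linking a burner to its owner from power-cycle timing alone.
# Timestamps are fabricated illustrations (minutes since midnight).

primary_off_times = [540, 780, 1020]   # target phone goes dark

candidates = {
    "burner_A": [543, 785, 1024],      # wakes 3-5 minutes later, every time
    "stranger_B": [300, 660, 900],     # unrelated pattern
}

def handoff_score(off_times, on_times, window_min=10):
    """Fraction of the target's power-offs followed by this device
    powering on within window_min minutes."""
    hits = sum(
        any(0 < on - off <= window_min for on in on_times)
        for off in off_times
    )
    return hits / len(off_times)

for device, on_times in candidates.items():
    print(device, handoff_score(primary_off_times, on_times))
# burner_A 1.0, stranger_B 0.0. The random usage and big gaps Michael
# recommends are exactly what push this score back toward chance.
```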

Fr. Robert: We're kind of ending on a downer. (Laughs)

Denise: You know, and I thought that we would with this discussion. I was ready for this, so I threw in —

Michael: Join Epic, join EFF —

Denise: Yes.

Michael: — join CDT, join the ACLU. These are great digital privacy groups — I mean, there are people fighting really hard. They're making some progress, but overall it's a losing tide at the moment. The public wakes up to this stuff slowly.

Fr. Robert: Yeah.

Michael: So I don't think people are fully awake to it yet. It's probably going to take 10 years. The question is, we have so much sunk cost in the current architecture — like the cell phone system, for example, which is very privacy-unfriendly — that we may decide it costs too much to fix the architecture, right? So it's a complex problem, because part of it's architectural; part of it's social practices; part of it's law; and a lot of it is the interests of law enforcement and other three-letter agencies that find these technologies really, really useful. And when you talk to people in government, they're really worried about things like people having virus printers in their garage, right? If you could start printing dangerous viruses on an eight-thousand-dollar machine in your garage — which is likely to be possible in the next decade — they think, Well, you've got to watch everybody.

Fr. Robert: Right.

Michael: And how are you going to talk them out of that?

Denise: Yeah.

Fr. Robert: Well, I'm just thinking about combining this with the story we heard not too long ago about the corporation that does mass collection of license plate data from those license plate cameras it has leased out to various cities. It made news because, just recently, that system was used to catch a murderer who had gone from one city to the next. They used the location data even though he never received a ticket. They had pictures of his license plate in enough locations that, even though he wasn't using his cell phone, they were able to track his location. So it's like you were saying: do you want to have privacy? Turn off your cell phone; never use the Internet. But now it's also, never go outside. Don't drive anywhere.

Michael: Wear a mask.

Fr. Robert: Yeah. Wear a mask. (Laughs)

Michael: You want a good investment? Masked fashions.

Fr. Robert: (Laughs) There you go.

Michael: I've been seeing more and more of them.

Denise: (Laughs)

Fr. Robert: Yeah.

Michael: It's coming. You heard it here first. It's coming.

Denise: You did. There's your tip. All right. And I did put in a unicorn chaser for this discussion, sensing that it might need one. Or, I don't know, maybe it just makes matters worse. It's actually not a unicorn chaser; it's more like a dog chaser: GoPro is making a Fetch dog mount. We were talking about Justin.tv earlier — it's Justin.tv for your dog. You put a little mount on your dog; it's carrying around a GoPro; and it's sold as, make your dog a cinematographer.

Fr. Robert: (Laughs)

Denise: But it's also making your dog watch whatever is around your dog. So if you're not already feeling the surveillance of all the various cameras in your world, look out for everyone's dog toting a GoPro. Maybe this would only happen in California — maybe only Southern California, where everyone seems to have a GoPro attached to a stick. Oh, my goodness. I was just down in San Diego, and everyone walking down the beach had a GoPro on a stick. (Laughs)

Fr. Robert: (Laughs)

Denise: It's — hello! Here I am on vacation. I've got my GoPro on a stick.

Evan: (Laughs)

Denise: So now your dog can do that duty for you. Evan, are you going to be signing up for one of these things for Christmas?

Evan: Well, no. I mean, you've got to like this technology; it's fun, and it's a great move on the part of GoPro. I think there's a real niche for it, because people will buy these things. But I read something recently about some folks who did a study putting these kinds of cameras — obviously much more sophisticated ones — on sharks. I think it was some Japanese scientists who did this; they captured some really interesting footage underwater of sharks in certain environments, seeing different species of sharks interacting with one another in ways they didn't know about before. So the concept has many more applications than just the novelty niche. But yeah, pretty cool stuff. And yeah, I guess we could talk about the privacy things, too, but haven't we had enough doom and gloom about that?

Denise and Fr. Robert: (Laugh)

Michael: So I have one of these in my office, okay? I have one of these — and what's really depressing is how few students ever mention this camera pointed at them.

Denise: Ah.

Fr. Robert: They're used to it.

Michael: If they do, I tell them —

Evan: They know it's a dummy.

Michael: Hey, you know, it's a fake; it's a consciousness [unintelligible]. I don't think they know it's a dummy. I think they're used to it. It's terrible.

Fr. Robert: They've got cameras in their phones. They're everywhere. People now are conditioned to believe you're always being filmed. And I hate being filmed.

Denise: Yeah, exactly. (Laughs)

Fr. Robert and Michael: (Laugh)

Denise: Well, that's — I see my son and his friends; and they do notice the cameras in their environment, and they ham it up for them. They do terrible, dastardly performances, hoping that somewhere someone will see it and be entertained or appalled.

Fr. Robert: (Laughs) It'll be gold, yeah.

Evan: Might as well.

Denise: Yes.

Michael: It's probably [unintelligible] to put silly string in somebody else's camera.

Denise: It probably is.

Evan: We'll find out.

Denise: All right. Let's end with some law and policy.

(The intro plays.)

Denise: Padre, we're getting down to the wire here on time. Which would you rather discuss, Net Neutrality or cell phone kill switches?

Fr. Robert: Ooooh. You know, I just went on a major rant on Net Neutrality, so maybe we shouldn't do that. (Laughs)

Denise: (Laughs)

Fr. Robert: That'll take me too long.

Denise: All right.

Fr. Robert: I'm still kind of boiling from that.

Denise: Yes. Just, in passing, to note that the FCC has extended its September deadline for Net Neutrality reply comments, so you still have time to get in and have your voice heard. It's obviously still a very hot-button issue, and you should listen to Padre's rant for more on the nuances of that. But let's talk about California's new smartphone kill-switch bill that just got signed into law. This stems from data that came out of, I think, the NYPD — a statistic on how many phones got stolen and how we really need to be able to, not wipe them, but lock them in the event of theft. Well, California has decided, yes, we so need to be doing that that we're going to mandate it for cell phone manufacturers. It's only going to apply to smartphones — not to tablets or other mobile devices such as hotspots, not to feature phones, etc. And it's not going to take effect until almost a year from now: July 1 of next year, 2015. So Padre, you're in California with me. Do you think this was overreaching on the part of our legislators?

Fr. Robert: I — yes. I understand why, though. It's a very good idea in concept: if thieves know that phones can't be used or sold once they're stolen, then we'll probably see the crime rate go down. But the legal kludge being used here I don't think is going to end well, because you can't just say, Okay, now there's a kill switch on it. If you're going to put a kill switch on the phone, you need to have a way to disable all functions of the phone; that's what the law says. So no emergency calls, no nothing. It also means that you have to have a combination of hardware and software that cannot be defeated in any way, shape, or form. You cannot replace the operating system, because if you're allowed to replace the operating system of a phone, you've disabled the kill switch. So it sounds like a great idea, but I think the unintended consequences are going to be great. We're going to see carriers use this as a reason why they won't allow their users to unlock their phones. We're going to see carriers use this as a reason why, Oh, you can't upload the newest firmware, because it hasn't yet met the criteria for California's kill-switch law. And then there are the regular concerns: this is going to be abused by hackers; it's going to be abused by jilted lovers who have access to your phone accounts; and it will be abused by the government. So: nice idea; I don't think it's good, though.

Denise: It seems to me like kind of a silly idea because, obviously, there is huge competition in the smartphone market; and if you, as a consumer, are concerned about being able to wipe your phone — or not even wipe it, just disable access to it after it's been stolen — you're going to buy one that has that functionality built in. I don't know why we necessarily need to legislate it. Michael, do you have a different take?

Michael: No, I think he hit it out of the park, there.

Denise: Mm-hmm. Well, thank you very much. That makes me feel good. I did well on my law school exam.

Fr. Robert: (Laughs)

Michael: Yeah.

Denise: Evan, any thoughts?

Evan: No. I mean, predictably, you can expect I'll agree with you on that. I mean, good grief — let the market play its role here. If it's better for a consumer to purchase one of these phones because they know it contains a theft-deterrent hardware/software combination, then that phone is going to sell better; just look at general principles of market economics to know that's what's going to happen. It's just another garden-variety example of overregulation, which is not going to be good for consumers in another respect, for two reasons — and this theme comes up a lot. It's going to increase the prices of phones because of the mandatory stuff that has to go into them; and conversely — the same thing, just the flip side of it — it may have the effect of stifling innovation. Now, I know Apple has more money than the kingdom of God, right, Padre? Or — I don't know if you see the books for — your boss's books.

Denise and Fr. Robert: (Laugh)

Evan: But — so, yeah. Maybe having this stuff mandated is not going to stifle Apple's innovation; but what about the upstarts? This is just another example of the dozens of hoops you've got to jump through to do business in California that you don't have to jump through anywhere else, and that's just going to add to the costs of innovation, raise the barrier to entry, and —

Michael: And yet, people still innovate in California. It's funny; people keep singing the song about how terrible the law there is; and yet, there it is, the tech hub of this country. It can't be that bad.

Evan: Well, now, that's the same argument that the RIAA makes about, there's —

Michael: Well, now I've been insulted. (Laughs)

Denise: (Laughs)

Evan: They're saying, Well, we would have sold even more music had there not been piracy. So — just think about how much better it could be in California. And Professor, seriously, not to take away from it being tongue-in-cheek: sure, people still innovate in California; but California sort of becomes the de facto federal standard then. If anybody's selling phones, well, they're going to be selling them in California, right? So they're going to have to do the same thing if they're selling them in Nebraska. So it's really not just a matter of influence in California; it becomes the de facto standard everywhere; and so the barrier to entry is raised everywhere. So just — you know, any —

Michael: That's true, but it's worked out better for us more often than not. I mean, [unintelligible], for example, stuff like that. I mean, you're right; but on balance, it's been a good thing. This may not be an example of it, but on balance it's been —

Evan: I guess. I mean, we don't have anything to compare it to; we aren't living in a counterfactual universe where we could compare it to anything else. But we'll leave it at that.

Denise: All right. And we need to drop a second MCLE pass phrase into the episode; I think we'll make it "brick phone."

Fr. Robert: (Laughs)

Denise: Since I miss my old brick phone. I always wind up wishing I had held on to those old legacy technologies. I'm sure someday I'll wish I'd hung on to my car that I could drive myself because I get nostalgic for them; I miss them. And I miss my old brick phone. I just would like to have it here as a paperweight on my desk. So —

Fr. Robert: We are coming to that point where we're going to cycle around, and that whole retro look is going to be fashionable again. And the people who have their brick phones are going to just look cool.

Denise: I know. Absolutely. Case in point: Guardians of the Galaxy and the Walkman.

Fr. Robert: There we go.

Evan: Yes.

Denise and Michael: (Laugh)

Fr. Robert: Oh, have you seen that — so we've got a couple of people here who are trying to build sort of a retro Walkman to match the one in Guardians of the Galaxy. The price of a Sony Walkman on eBay has gone up from about $4 to, like, $4,000.

Denise: (Gasps) You're kidding me!

Fr. Robert: It's ridiculous. (Laughs)

Denise: $4,000? (Laughs)

Fr. Robert: It's horrible.

Denise: Oh, my God, from one movie. Ah, well. We're merely flawed humans. Maybe when the robots take over, we'll make better decisions. Or they will.

Let's move on to our tip and resource of the week. Our resource is the listicle that Evan teased last week about things to know about trademark. Evan, can you give us the high points of that?

Evan: Oh, sure, yeah. I wrote this for my firm blog over at infolawgroup.com, just sort of an easy read, hopefully, about some very basic things that you should know about trademarks, geared toward business owners, decision-makers in the marketing department, just some real basic stuff about what trademark law is compared to other forms of intellectual property. It protects the brand. It addresses this problem of descriptive phrases. A lot of companies really want a descriptive term to use as a trademark. Well, that often doesn't work as a trademark because it's not distinctive in the way that it tells the consuming public about the source of the goods or services. So yeah, just take a look at that; it's hopefully an easy read over at infolawgroup.com; that's my firm.

Denise: It is; it's great stuff to know about trademarks. It is very straightforward and digestible and includes the heading, "Trademark fair use is a thing." So in case you didn't know, now you do. And you'll know a lot more if you go check out Evan's piece.

Our tip of the week is: Know your traffic lights. And no, I am not talking once again about self-driving cars; I'm talking about D.C. attorney Mike Mayer, who got pretty royally taken down by Cory Doctorow over on Boing Boing for his use of the DMCA. We were talking about all of the DMCA requests that Google gets and how it would be interesting to find out how many of them were bogus. Cory makes a strong argument here that lawyer Mike Mayer submitted some bogus ones, complaining about critique of him having sort of switched teams. He used to work for, or with, EFF on the copyright troll defense front; and now he's actually representing people who are out there enforcing their copyrights and trademarks and things. Somebody did some commentary about that team-switching, and Mr. Mayer didn't like it, and alleged that the screenshots used to portray his team-switching were not only copyrighted but also potentially defamatory — which really doesn't have anything to do with the DMCA. That didn't stop him from putting that kind of assertion in his takedown request. So if you are a lawyer, and an Internet lawyer in particular, know your DMCA before you go firing off takedown requests. As Cory points out, the Streisand Effect is real — here we are talking about this story as well. You don't want to be that guy, the Internet lawyer who filed and signed, under penalty of perjury, a bogus DMCA takedown request. So Evan, any thoughts on this?

Evan: It's just a shame to see people make bad decisions as a response to having made a bad decision. So you've got to feel for this guy; he's just trying to get by ... Actually, I don't feel for the guy. He made some bad decisions.

Fr. Robert and Denise: (Laugh)

Evan: We should be talking bad about him. So —

Denise: Yeah.

Evan: Yeah. You see this a lot in the practice of law; and in my opinion it's something that gives the practice of law a bad name — it makes people have a bad sense of lawyers — because switching sides is not cool; it smacks of opportunism and of being disingenuous with the public, doing an about-face like this. So it's good that the error of his ways is being brought to light here, because hopefully that will — and it gives us an opportunity to remind people that not all lawyers are like this. Some of them are actually cool, like you, Denise.

Denise: Well, thank you so much; I appreciate that comment. (Laughs) And I think it's just educational: even people who practice Internet law — we talked about Google getting a takedown request every eight milliseconds — even people who practice law in this area might not cross all their t's and dot all their i's. I'm not saying that's what happened here; but it certainly looks like there's a pretty good argument that this stuff is not subject to takedown, that it's not infringing, that there are reasons why it should be up there, and that the DMCA is not the way to go about getting it taken down. So Michael — I mean, is this a comment on — obviously, the DMCA has so many benefits and is so helpful to sites on the one hand; but then again, enlisting every human on earth to review all your takedown requests does seem like a difficult structure to implement.

Michael: Well, I mean, the DMCA has all kinds of flaws, and here's another one.

Denise: Yeah.

Michael: I've never liked it; I even hate teaching it, but you have to because it's a big piece of the landscape.

Denise: Yep.

Michael: I want to give you my tip of the week.

Denise: Sure. Please do.

Michael: Read Evan Brown's blog on Internet law cases. It's great; I've been reading it for a long time.

Denise: It is.

Evan: Thank you. Thank you.

Denise: It's one of our favorites. Evan blogs at internetcases.com and finds the time to actually post substantive analyses of new and really interesting developments, too. It's the curation, Evan, that you do. You always pick great cases to write about and analyze, so we definitely appreciate your efforts. I know it's not easy to make the time to do it.

Evan: Well, thank you very much, yeah. I appreciate the plug. It's fun to do.

Denise: Sure. All right. Well, this show has been inordinately fun to do. I'm so glad that you all were able to take the time and join us today. Michael Froomkin, such a pleasure to meet you and get to chat. I know that you're a busy guy, and the fall term — is it started yet or imminently starting?

Michael: It's been going two weeks. I've even finished the first case.

Denise: Oh, my gosh. So good.

Evan: (Laughs)

Denise: And your students are so fortunate to have such a great professor to learn from. Is there anything going on — we know We Robot is in our future. Anything else going on at University of Miami that you want to plug before we go ahead and get out of here?

Michael: Well, We Robot's going to be at the University of Washington this year.

Denise: Right.

Michael: It's going to be April 10th to 11th. I have an online law journal I run called Jotwell.com which is about the latest legal writing; and we're having a conference here in a couple months. So if you go to Jotwell.com, you can learn all about that. I guess that's the other thing I should plug.

Denise: Very cool. We will follow along with both of those.

Michael: Man, that was quick. That's impressive.

Denise: Yes, that was quick. Padre SJ, the digital Jesuit, we are so thrilled that you could join us today. Thank you so much for taking time out of your Friday.

Fr. Robert: Thanks for the invitation; it's always fun to be on this show. I mean, you get to talk about a lot of the stuff that I would love to talk about on my shows, but it just doesn't quite fit. (Laughs)

Denise: Yeah, I know. We always get mission creep critiques, I think, on both of our shows. We like to talk about robots, and we struggle to find the legal angle.

Fr. Robert: (Laughs)

Denise: Fortunately, it's not hard to find. I think you like to talk about legal things, and your audience says, Hey, what does this have to do with implementing my security program?

Fr. Robert: My network. It's always — but the law's cool! (Laughs)

Denise: It is fun.

Fr. Robert: Come on, the law! Enterprise people love the law.

Denise: Yeah.

Evan: They think "root canal."

Fr. Robert: Yes. (Laughs)

Michael: (Laughs)

Denise: Or theft, one of the two. (Laughs)

Fr. Robert: But no, thank you very much — and thank you both for being on TWiET a few weeks back. We will do that again, because it's always nice to be able to clear out the legal docket. We like to save up some of the legal-heavy stories; we can give our opinions, but it's always nice to get people who are actually in the know.

Denise: Sure, we'd love to come back sometime soon. Thanks for the offer.

All right, folks, we really appreciate your tuning in with us. It is a Friday morning — well, it's no longer morning; we started this show at 11:00 Pacific Time, which is 18:00 UTC, so if you like to watch live along with us, that's the time you should put in your calendar. But you don't need to schedule your life around us. We appreciate it when you do, but you certainly don't have to. You can get this show on demand after the fact by going to twit.tv/twil; our whole archive of shows is available there. If YouTube is your poison of choice, go over to Youtube.com/thisweekinlaw; you can watch us there, too. We're on iTunes, we're on Roku, and various other sources. I'm sure you've got this all covered, but we're just glad that you watch however you choose to do so. You should get in touch with us between the shows, too, because that's how we get some of our best ideas for guests and stories. I am @dhowell on Twitter; Evan is @internetcases there, same name as his blog. Or you can email us: he is evan@twit.tv; I'm denise@twit.tv. Or find us on Facebook or Google+; if you have more to say and want to get into a lengthier discussion, those are good places to do that. We really enjoyed being with you, and we'll see you next week on This Week in Law!
