Transcripts

FLOSS Weekly 739, Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Doc Searls (00:00):
This is Floss Weekly. I'm Doc Searls. This week, Katherine Druckman and I talk with Golda and Romeo (he has a much longer actual name) about AI in the broadest sense. But we go so deep and cover so many subjects that if you listen to or watch this show, you're gonna come away with one-liners you never thought of before, ones that are really good, really important, and really relevant, and ones you wanna do without. And that is coming up next.

Leo Laporte (00:31):
Podcasts you love from people you trust.

Doc Searls (00:39):
This is Floss Weekly, episode 739, recorded Wednesday, July 5th, 2023: Whose AI Is It, Anyway?

Leo Laporte (00:50):
Listeners of this program get an ad-free version if they're members of Club TWiT. $7 a month gives you ad-free versions of all of our shows, plus membership in the Club TWiT Discord, a great clubhouse for TWiT listeners. And finally, the TWiT+ feed with shows like Stacey's Book Club, the Untitled Linux Show, the Giz Fiz and more. Go to twit.tv/clubtwit and thanks for your support.

Doc Searls (01:17):
Hello again, everybody everywhere. I am Doc Searls, and this is Floss Weekly. This week I'm joined by Katherine Druckman herself, my companion from another podcast, in Houston, Texas. Yes. How you doing, Katherine?

Katherine Druckman (01:32):
I'm, I'm okay. Still, still in Houston Hot.

Doc Searls (01:36):
That's hard and inconsistent: "I'm okay, still in Houston." There's a country song in that <laugh>. Yeah,

Katherine Druckman (01:43):
I'm surviving Houston today,

Doc Searls (01:45):
<Laugh>, it's gonna be 90-something here in Bloomington, Indiana shortly, and I have to walk somewhere immediately after this, so that's gonna be an interesting thing, cuz the humidity is, you know, 90% or something like that, I suppose. So our guest today, or our guests in this case, we started with one guest and now we have two, is Golda Velez, who you and I together talked with a few weeks ago on our podcast, Reality 2.0, which is a bit more conversational than this one. This is more of an interview style. But that I think was the most re-listenable podcast that we've done. And I say it cuz I just finished re-listening to it <laugh>, and I actually stopped taking notes after like the first nine minutes cuz there were just too many subjects that were uncovered by anything else. And where are you thinking about taking this one today, Katherine?

Katherine Druckman (02:45):
Oh, I don't know. I, you know, I, I like to go back. I love the conversation about how identity relates to trust

Doc Searls (02:51):
Me too. Yeah, I'm kind

Katherine Druckman (02:52):
Of, I'm really looking forward to, to getting into that some more and picking up on everything we may have missed in the last, the last time we talked to Golda. So I think, yeah, I think we're gonna have a good time.

Golda Velez (03:02):
So thank you for focusing on that. That's, that's the key point. If that's the one thing, if everyone stops listening now, that was the key <laugh>.

Doc Searls (03:08):
Well, so we have you, Golda Velez, you're in Tucson, Arizona, and your companion in work, Romeo, whose full name is <laugh> the longest one I've seen on the show so far <laugh>. So I'll try to cover both of you as briefly as I can. Golda is a Jack or Jill of so many trades that I can't cover them all, and simply one of the smartest and most engaged people I know in the technical world, or maybe in the world, anyway. And she works with people all over the place, and I mean literally all over the planet, putting together teams and getting tech stuff done, and generally affordably as well. And Romeo, I guess, is one of the people she works with. He's a software engineer. He's a student in computer science at the University of Nigeria in Nsukka. Is that right? Is that how it's pronounced? (Yeah, that's correct.) Right. And you're <laugh> working on software engineering. Why don't you just take turns and tell us a little bit more about each of you, just to round it out, before we dive deeply into AI, which is the only topic these days.

Golda Velez (04:35):
Alright? No, it is, it is. I was just talking to the AI since 2:00 AM trying to get my work done before this podcast, so I've been talking to him all morning <laugh>. Yeah, so I have a longish background. Came from Caltech, decided I had to do software cuz I only knew math, and decided I wanted to try to make things better. A good friend of mine there had cooperation.org. And long story short is, my day job is at Ceramic, ComposeDB, which is an open source, you know, Web3 database layer, a decentralized database. And that lets me spend the rest of my time helping and advising this co-owned community startup, which Romeo is a full, I don't know, I don't wanna say full-time, but you put a lot of your time into it and do a lot of work with us, trying to really address the core issue of decentralized trust. And there's plenty in between, but we can get into that later. And yeah, Romeo, why don't you talk a little bit about yourself?

Romeo Chukwuemeka (05:35):
Well, I'm a front, well, I started literally at nano coach programming, where I remain as a frontend engineer here in Nigeria. It's a software development company that delivers projects. So I started my career there as a frontend, and currently I'm at cooperation.org working as a, can I say, like Golda said, you won't put it as full time, but then I put in a lot of effort to make things happen in the company. Yeah, that's brilliant,

Doc Searls (06:09):
Right? So let's get rolling on this. This is something you brought up on the last podcast, which is identity in AI, and that ties in with provenance, like where did this come from? It ties in with DIDs, which are a hot new thing, where everything gets its own cryptographic private and public key. But the biggest thing is that we need to be able to trust these systems. We trust who we're interacting with, trust that the party we're dealing with is the real thing, trust that what they're providing to us, the sources of that, are real, all of that stuff. So, and you spoke more eloquently than anybody on this, so I'm gonna set that bar high. You've already said it. Go ahead, tell us what this is about.

Golda Velez (06:56):
How about a fall <laugh>? But anyway, that is actually the key thing. And the interesting thing is you hear people talking about it a lot of different ways. Everyone's worried about these AIs, right? I mean, who knows what they're gonna do? They're gonna imitate people, they may be firing guns pretty soon, you know, who knows what these AIs are gonna do. So everyone is worried about them, and people are coming up with different rules about them. I mean, you have this group in Santa Clara that's like, well, they should value humanity first, and they should do this. And I mean, that's great, they should do those things. But, you know, there's, I guess, my favorite Russian saying, which I don't know if it's really a Russian saying: who guards the guards? How do you enforce these rules? And also, you know, how do you watch out for the enforcers of rules?

(07:39):
And I think the core issue right there is just what you said: it's identity. Because if you don't know who is talking, you can't hold them accountable. If you don't know who is behind them, or who's funding them, or who is being responsible for something, you can't hold them accountable. Now, you can't actually force all the AIs to identify themselves. You can't force people to do anything, right? You got free speech. People can run an AI and just make it say whatever, and make it pretend to be whoever. They can do that. You can't stop them from doing that. You can try to pass laws, but it's gonna be hard. I think what you can do is you can recognize long-term identifiers. And it's interesting that you said DIDs are, you know, a new thing. They're not really a new thing, but people are just starting to use them.

(08:24):
They've been around a while. They're a W3C standard, and there's like 50 different DID methods, and people have been working on this, as you know from going to Internet Identity Workshop, for quite some time. But the thing about DIDs is, basically, you can just be a private/public key pair. There's other forms of DIDs, like I said, there's 50 methods, but you can control your own DID the same way, if you're a crypto person, which a lot of you guys are not, you can control your own wallet. So the key here is you can identify yourself, and I think the best we can actually do in reality is, as a community, try to encourage it as a standard, to try to make spaces where only people who identify themselves can come in. They don't have to say their name, but they have to give some long-term pseudonymous identifier that can grow a reputation, where you can say, you know, Joe7734, I don't know who he is, but he's always said smart things. And then if he does something dumb, he's burning his social credit. Oh man, like Fred8876, he tried to sell me a scam thing. Don't listen to Fred8876, maybe we'll ban Fred. But you have to have some type of long-term identifier. And I could get all the way into game theory and evolution of trust and all that stuff, but you said this is an interview, so I'll wait for the next question.

Katherine Druckman (09:44):
Just really quickly, could you, for those less familiar, describe what a DID is?

Golda Velez (09:50):
Sure. I mean, I should know, I use them. I have a DID <laugh> <laugh>. It's a decentralized identifier. Oh God, what does it actually stand for? Decentralized identifiers, I think. Yeah, yeah. Distributed? The second D is just from the identifier word, it's not a third word. But essentially, it could just be a string. I mean, a DID is a URI. It can start with like did:key or did:pkh. There's these different DID methods that identify your DID. And so it's basically saying, here's a prefix telling you how to read my DID, and here's a long string, which is probably not human readable for most of the methods. And this is a permanent identifier. Yeah, there you go.

(10:38):
It's a W3C recommendation. And then, depending on the method, there may be different ways to look it up. I mean, the simplest method is did:key, where literally your DID is a public key, and you can prove you own it cryptographically with your private key. The other methods have different ways of handling that, and different types of key management and different signing methods and different encryption strengths. But fundamentally, it's something where you have something private that you don't share, and you can prove to people that this is you. And then you share the public string, you know, just like you do with any key pair. But there's methods for signing with it and for lookups, for, you know, looking up who you are. And so the different DID methods provide different types of functionality, but there's a lot of libraries out there that let you sign documents with them.
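[Editor's note: the "prefix telling you how to read my DID, plus a long string" shape Golda describes is the W3C DID syntax, did:&lt;method&gt;:&lt;method-specific-id&gt;. A minimal sketch of parsing it, with the example identifiers invented for illustration:]

```python
def parse_did(did: str):
    """Split a DID string into its method and method-specific identifier.

    Per the W3C DID syntax, a DID looks like
        did:<method>:<method-specific-id>
    e.g. "did:key:z6Mk..." or "did:pkh:eip155:1:0xab...".
    """
    parts = did.split(":", 2)
    if len(parts) != 3 or parts[0] != "did" or not parts[1] or not parts[2]:
        raise ValueError(f"not a valid DID: {did!r}")
    _scheme, method, identifier = parts
    return method, identifier

method, ident = parse_did("did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK")
print(method)  # key
```

[The method prefix tells you which rules to apply when resolving or verifying the rest of the string, which is why it has to come first.]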

(11:25):
You can, you know, that's how we use it in ComposeDB. You can sign your data stream with a DID, you can control your data stream with a DID. So it gives you this shared framework that different people can build on, to allow you to control bits of data with your private key, or with your custodial key, or with however you're managing it. There's different DID methods with key rotation methods and so on, in case you lose your key. But in any case, you have this way to control your identity yourself, and then with it you can sign things and say, yeah, that was me, I did it. You can't prove it was me physically, but you can prove it was this DID. And then that DID can keep that reputation. And then if, you know, there's an account takeover, somebody's gonna phish somebody and get their keys, and then that DID will lose its reputation. But you can talk about the reputation of the DID.
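[Editor's note: real DID methods prove control with asymmetric signatures (did:key, for instance, typically uses Ed25519), which Python's standard library doesn't provide. So this toy stands in a hash commitment for the real signature: the holder keeps a secret, publishes its hash as the identifier, and reveals the secret once to prove control. The did:toy method and all names here are invented for illustration, not a real DID method:]

```python
import hashlib
import secrets

class ToyDID:
    """Crude stand-in for DID proof-of-control (illustration only)."""

    def __init__(self):
        self._secret = secrets.token_bytes(32)           # kept private
        digest = hashlib.sha256(self._secret).hexdigest()
        self.did = f"did:toy:{digest}"                   # shared publicly

    def prove_control(self) -> bytes:
        # One-time reveal; a real method would sign a fresh challenge instead.
        return self._secret

def verify_control(did: str, proof: bytes) -> bool:
    """Check that `proof` hashes to the identifier embedded in `did`."""
    expected = did.rsplit(":", 1)[-1]
    return hashlib.sha256(proof).hexdigest() == expected

holder = ToyDID()
print(verify_control(holder.did, holder.prove_control()))  # True
print(verify_control(holder.did, b"not the secret"))       # False
```

[The point Golda makes survives the simplification: anyone can check the public identifier, only the holder can produce the proof, and if the secret leaks, the identifier's reputation, not the person, is what burns.]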

Doc Searls (12:16):
Wow. <Laugh> Sorry, that's <laugh>, that's a lot. I'm stunned by the technical capability. That's actually the most I've heard about DIDs, possibly. Are you working with DIDs at all, Romeo? Is that part of your world?

Romeo Chukwuemeka (12:33):
Yeah, yeah, yeah, yeah. I'm still new to DIDs, but I can say a little about DIDs, in the sense that they help to control ownership. Like, they empower individuals with control and ownership over digital identities. And also DIDs help with trust and privacy. Like, they provide a foundation for trust and privacy in the Web3 world. Decentralizing identity management, individuals can choose which entities they trust and share their personal data with, using it.

Golda Velez (13:06):
And actually, the project you work on, Romeo, the JLINC TrueNet project, allows you to sign legal documents with your DID, I believe. That's Jim's project that you're working on?

Romeo Chukwuemeka (13:15):
Yeah, the JLINC project. But at this point, I haven't gotten to use DIDs.

Doc Searls (13:20):
Okay. Did I hear this? The JLINC project, is that the same one that Ian Henderson works on, and a number of other people?

Golda Velez (13:28):
Jim Fournier works on it, I think.

Doc Searls (13:30):
Yeah. Jim Fournier. Yeah, right. Yeah, yeah. No,

Golda Velez (13:32):
Romeo's doing the front end on that.

Doc Searls (13:35):
Oh, wonderful. Wow. Yeah, our friends from Scotland and the northwest of the US are working with you on this thing. Yeah, JLINC, by the way, you can look it up as J-L-I-N-C, and it has a protocol. So let's go to open source with this, because when we talked before, something came up that actually had to do with identity, and it had to do with the trustworthiness of things that didn't necessarily comport with the stories we tell ourselves about open source to begin with. And so this is relevant to the show, which is like, okay, we know who OpenAI is, and we know what ChatGPT is, so we're talking to that, you know, even though it's, like, ingested all the bull crap in the world.

(14:25):
We know it's coming to us from one spigot, and who runs that spigot. But after Llama was released by Facebook, now there's this whole herd of ungulates, or whatever they are, whatever breed that is, camels and llamas and alpacas and so forth. They're all open source, and they could be used in all kinds of ways. And so what's your thinking about how we get trust inside of that, where there are any number of different systems, all based on open source, that could have any number of good or nefarious purposes in mind?

Golda Velez (15:02):
That it's very important that somebody build, early on, a lightweight framework for incorporating identity. And I think the way that should be is, you should give a framework that an AI can use, and if it uses it, it can basically answer the question, who are you? And it answers with its DID. And we make that a very lightweight open source framework, and all of the AI developers, encourage them to use it. And the standard should be that when we ask an AI, who are you, who controls you, it should answer with at least an identifier. And if it refuses to answer, now, you know, don't trust that AI. Now, the only way we know who is running these AIs is back to the very old system of BIND, DNS. You know, we know what domain you're talking to. That's really all you know at this point. And so it's so interesting, you know, with all this cryptographic trustless stuff, what we really know is what domain you're talking to, and that's still the way it is. But these AIs, which may be on different platforms, may be talking to you on Facebook or Twitter or all these things, which now have to deal with moderating these guys, they should answer a challenge. You know, and, I'm just gonna rant for a minute, is that okay?
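[Editor's note: the lightweight challenge Golda proposes, ask "who are you?", require at least an identifier back, refuse the agent otherwise, can be sketched as a gate. The `ask` callable, the replies, and the format checks are all assumptions for illustration, not any real chat API:]

```python
def admit_agent(ask) -> bool:
    """Gate an agent by the proposed challenge: it must answer
    'who are you?' with some long-term identifier (a DID or other URI).

    `ask` is any callable that sends a question to the agent and
    returns its textual reply -- a stand-in for a real chat API.
    """
    reply = (ask("Who are you? Who controls you?") or "").strip()
    # Minimal check: a DID, or at least something URI-shaped.
    looks_like_did = reply.startswith("did:") and reply.count(":") >= 2
    looks_like_uri = "://" in reply
    return looks_like_did or looks_like_uri

print(admit_agent(lambda q: "did:key:z6MkhaXg"))  # True
print(admit_agent(lambda q: "I'd rather not say."))  # False
```

[A real deployment would follow the identifier with a cryptographic proof request, as Golda describes later; this only covers the "answer with at least an identifier" floor.]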

Katherine Druckman (16:12):
I, please, go on.

Doc Searls (16:13):
Please, oh, run on. This is all too good, too good not to hear <laugh>.

Golda Velez (16:19):
You know, my parents are

Doc Searls (16:20):
Preach, sister.

Golda Velez (16:21):
Both biologists, you know. A lot of game theory. I grew up hearing about genetic game theory and, you know, the selfish gene. And I'm thinking about the selfish meme, of selfish brains or thoughts that jump from brain to brain, which I'm feeding you right now. And so I'm thinking about all these things, and, you know, I was at Caltech around lots of scientists, but really, the definition of science: I went to the Desert Museum with my three-year-old, and there was a docent there who was a former scientist. And he was explaining in a very simple way to the parents, what is science and what is not science. And what he said to everyone was: science is something you can disprove. If you can't disprove it, it's religion, or it's not science, anyway. So you have to have something that's disprovable.

(17:02):
Therefore, it's not so much what you see, it's how it relates to a challenge. And I think that's the case for a lot of things. I think that's the case for, like, how do you know if somebody's good? It's when you ask them, did you hurt me? And they don't shut you up, but they actually say, oh, who are you, and what did I do to hurt you? It's a challenge. You have to have a test case. And I think the test case for AI should be: who are you, who owns you? And it should answer.

Katherine Druckman (17:29):
So I have so many thoughts, so many thoughts. But, so, going back to the idea of how providing identity builds trust in any system, any human, any whatever. But at the same time, again, when you're talking about requiring a response to that challenge from an AI, how do you, like, how do you punish an AI? Like, what, how do you, yeah, okay.

Golda Velez (18:05):
Oh, the old system. I mean, how you can,

Katherine Druckman (18:06):
Yeah, how do you?

Golda Velez (18:08):
Sure. I mean, the same way you'd exclude any user: you ban it, you mute it, you give it a shadow ban. You just treat the AI as just a user on a system. Right now, they're only text. I mean, later they're gonna be walking around, you know, with guns and cars, but right now they're just text. So all we have to do is kick them off the system like a user. So if they won't answer who they are, you kick 'em off.

Katherine Druckman (18:30):
But it seems to me that a well-trained AI has capabilities, obviously, that a human doesn't. And, I guess this is where my mind is going <laugh>, it gets creative. But it seems to me that an AI could get around those types of moderation controls a lot better than a nefarious actor that is human. And I guess that was where my original question was going. Well,

Golda Velez (18:59):
I mean, it's gonna be very clever, but the thing is, if it answers who are you, it has to answer, like, we make it a simple requirement, you have to answer with a string, a URI. It has to give me a URI. Sure, you can give me your LinkedIn URI, you can give me whatever. You've gotta claim who you are. And if it's a DID, it can also prove who it is cryptographically. It can, like, sign something with its key. So that's why DIDs are better. Cuz it could claim it's some guy off LinkedIn, and who knows who it really is. But if it claims it's a DID, you can also say, prove it. Sign something. And it has to sign something. And I mean, that might get broken later, but right now cryptography is not broken yet <laugh>.

(19:34):
So if it claims to be a public key, it can sign something with that key and prove that's who it is. So it should also respond to a request to prove it. I mean, not too often, you don't wanna DDoS the things, but it should respond reasonably well to a request to prove who it is. And if it's signing with a DID, it can prove who it is. Now, if it says who it is and you've never seen that DID before, maybe you wanna kick it off. If it says who it is and you know that's a bad-reputation DID, maybe you wanna kick it off. If it says who it is and that DID has a good reputation, let's leave it on, whether it's a human or an AI. And if it does something bad, it's burning its reputation. I mean, that's, to me, that's the only thing you can really do.
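[Editor's note: the admission policy Golda walks through here, unknown DID gets scrutiny, bad-reputation DID gets banned, good-reputation DID stays, and bad behavior burns the score, reduces to a small lookup. The DIDs, scores, and labels below are invented for illustration:]

```python
REPUTATION = {}  # did -> integer score; the score follows the DID, not the platform

def record(did: str, delta: int) -> None:
    """Good behavior raises a DID's score; bad behavior burns it."""
    REPUTATION[did] = REPUTATION.get(did, 0) + delta

def admission(did: str) -> str:
    """Decide what to do with an agent claiming this DID,
    human or AI alike."""
    score = REPUTATION.get(did)
    if score is None:
        return "review"   # never seen this DID before
    if score < 0:
        return "ban"      # it burned its reputation
    return "allow"

record("did:key:joe7734", +3)   # always said smart things
record("did:key:fred8876", -5)  # tried to sell a scam
print(admission("did:key:joe7734"))   # allow
print(admission("did:key:fred8876"))  # ban
print(admission("did:key:stranger"))  # review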

Katherine Druckman (20:16):
Okay. I wonder also, sorry, no, no, go for it. Going back, well, there was something that Romeo brought up, and that's empowerment, right? The user empowerment, when you take control of managing your identity, or, you know, your keys, or whatever it is. I wonder, like, there's kind of a gap between a technical user, there's a gap between Romeo and a lot of other people out there, right? And how do you see bridging that gap so that you actually do empower non-technical users? This is always a question that I have with any, like, brilliant idea that I hear, you know, it's a technical idea. That's my biggest question: how do you get to everybody else?

Romeo Chukwuemeka (21:06):
All right. Is that a question for me?

Katherine Druckman (21:08):
It's a question for

Doc Searls (21:09):
Both of you, it could be, yeah. Why not? Let's, yeah <laugh>

Romeo Chukwuemeka (21:13):
Good. Wanna give it a try, you guys?

Doc Searls (21:23):
Yeah,

Romeo Chukwuemeka (21:27):
I think I'll pass on this question. It's kinda tricky.

Golda Velez (21:34):
Yeah, no, it is hard. I mean, that's one of the hardest things, because there's always a trade-off between convenience and security. If you make something super easy, where you're always logged in every time you're on your device, some guy steals your device and there he goes. I mean, I was at RightsCon, and a guy from Ghana who's in the LGBTQ community basically said, you know, it was a little bit hard to hear, but what they did is they took him, stuck him in a room, made him unlock his phone, and then they phished all his friends while he was locked in the room. And so, you know, that was really bad. He's okay, cuz he was there at RightsCon, he's fine, but he was beat up. And, you know, I don't know what happened to his friends. I think, I don't know, actually. So, you know, if you give all your trust to a device and you don't have physical security, we used to have a saying in system admin: if you don't have physical security, you don't have security. And we forget that about 38% of the world doesn't necessarily have physical security from their governments, depending on what they're doing.

Katherine Druckman (22:33):
Yeah. So, and how do you see empowering people fixing that problem?

Golda Velez (22:43):
Right? Right. Sorry. Oh, you wanted a solution, not just more problems.

Katherine Druckman (22:46):
Sorry. I mean, ideally it doesn't have to be <laugh>, you know, in an ideal world, it doesn't have to be probably

Golda Velez (22:54):
Worse. I wasn't,

Doc Searls (22:55):
We're still reverberating on "they're gonna have guns and cars." So <laugh>, that kinda off-staged a lot of other thoughts that were, well, you know,

Golda Velez (23:05):
That MAD magazine, the Spy vs. Spy panel with, oh

Doc Searls (23:08):
Yeah, that was great.

Golda Velez (23:09):
Yeah, that's the world we're in.

Doc Searls (23:10):
We're just a bonus to that. Yeah.

Golda Velez (23:12):
Rest of our lives. But no, how does it empower people? I think it's a good question. I think actually what I am excited about in AI is it does empower people to express their intentions, because now you don't have to adapt to the computer, you know, fiddling with these settings and stuff, so somebody has to walk you through it. You just tell it what you wanna do, and it understands you. So to me, that's a big difference with AI, that it understands you. So if you had a trusted one, you know, that had different types of security settings, you could interact with it by voice, and it could, like, listen to your voice timbre. You could tell it a few secrets and, like, hide them different places. I think it creates a much more natural user experience that's going to just empower a tremendous amount of things.

(23:56):
And maybe it can help with security, if we develop one that is, you know, safe to use locally. But again, there's gonna have to be some trust things, where, you know, how do you know it's trustworthy? Maybe your friend tells you. Maybe your technical friend says, yeah, I can see cryptographically it's signed by somebody I trust, so yeah, you should use it. So I think that people are gonna have to be in a trust network. Romeo might tell his friends which one is good to use, and then all they have to know is, which one should I get? And then they can just talk to it.

Katherine Druckman (24:26):
So, yeah, I guess in an ideal world, everyone has that technical friend <laugh> to help. I guess, not specifically related to AI, but just related to taking control of one's digital identity in general, that whole conversation, I guess that's the kind of stuff that interests me in general, just in how that could empower less technical users. But I don't know, that's always my biggest question: yeah, we can come up with some really great technical solutions, but what's the end-user experience there, and how approachable is it for everyone to take control over their identity? And for the exact reason that you mentioned before: it's great if I can figure out how to prevent my phone contacts from being leaked by a hostile actor, but if not everybody can, then, you know, I don't know how well the problem is solved.

Golda Velez (25:24):
Right? No, no. I mean, my take on that, and this is something I think about a lot. I mean, I used to work risk for Postmates. I used to fight organized fraudsters who, you know, had this whole setup. They were basically another company doing fraud against us. It really was a spy-versus-spy thing. But I think a good solution has to be very user-friendly. I do think that what we're trying to work on at LinkedTrust and at cooperation.org is essentially making it easy to see your social graph, just like LinkedIn, but decentralized, with separate signatures. So you could see who says something is true. Because what people really do is they trust some people they know. And, you know, as long as you're a few hops away from someone who knows something, you know, maybe somebody trusts one of our non-technical people, who then trusts Romeo, or somebody trusts you, Katherine.

(26:17):
I mean, you're actually pretty technical, but if they trust you, and you say to trust me, and I say it's this one. So I think it's still a chain of trust. So as long as you're a few hops away, yeah, from someone who can recommend something, I think you're okay. And as long as we can have that decentralized linked trust, then I think we can get there. Another solution I do wanna point out is AT Proto, which is the Bluesky protocol, which does make it easy to bring your identity with you from server to server. And I think that's the one big thing that they focused on first, and I think that was the right call, because it's very transparent. I mean, I've logged into different servers and it's just me. I log in with my AT Proto ID, and I'm really logging into a different server. In this case they're both controlled by Bluesky, but they don't have to be. And it's just me, and it gets my decentralized feed.
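[Editor's note: the "few hops away" chain of trust Golda describes is essentially a reachability check over a graph of who vouches for whom. A minimal breadth-first sketch; the graph and names below are invented for illustration:]

```python
from collections import deque

def within_hops(graph, start, target, max_hops=3):
    """Is `target` reachable from `start` in at most `max_hops` trust
    edges? `graph` maps each identity to the set of identities it has
    vouched for."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        node, dist = frontier.popleft()
        if node == target:
            return True
        if dist == max_hops:
            continue  # don't expand beyond the hop budget
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return False

# A made-up trust graph: you trust Katherine, who trusts Golda, who trusts Romeo.
trust = {
    "you": {"katherine"},
    "katherine": {"golda"},
    "golda": {"romeo"},
}
print(within_hops(trust, "you", "romeo"))              # True (3 hops)
print(within_hops(trust, "you", "romeo", max_hops=2))  # False
```

[In a decentralized version, each edge would be a separately signed claim rather than a row in one company's database, which is the distinction Golda draws with LinkedIn.]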

Doc Searls (27:08):
So I'd like to jump back to this lightweight thing. You know, you said among the things we need is this lightweight framework for AI identity. And I wanna go into what people are working on. Is anybody working on that as one of the things that we need? Or is that just gonna fall out of identity conversations as, sooner or later, we get it?

Golda Velez (27:32):
We better do it soon <laugh>.

Doc Searls (27:34):
Yeah.

Golda Velez (27:36):
We've submitted a couple grants to work on this stuff. One of the things about open source is, you know, open source is great, and sometimes there's devs who have extra time outside their day job, or don't need to make money, and they work on open source, and they're great. You know, sometimes there's companies that are essentially making money from your software as a service. I think we're going to need to find more ways to fund open source development, and maybe those will be tokenized ways. You know, that's kind of what Ceramic is doing. I mean, we're gonna have to find more ways to fund open source development aside from capturing things in a platform, which is what we've had to do, because we've got to incentivize this. So I hope, if somebody's listening, I hope somebody is working on it. I see a lot of groups talking about security, talking about policy. I'm not aware of a particular group working on a lightweight library and a standard for AIs to say who they are. I'd be happy to encourage somebody to start one, or apply for a grant with somebody, or anything I can do to help that happen. I'm not aware of it happening yet.

Doc Searls (28:46):
Okay, you just dropped a line that is reverberating in my mind. This may even upstage the "have cars and guns," which is: we have to find more ways to fund open source than capturing it in a platform. That's what happened, isn't it? I mean, of course, if Greg Kroah-Hartman was here, he'd disagree with that right now, cuz he was on a few weeks ago. He works at the Linux Foundation. He's not captured, in that sense. But, you know, 25 years ago everybody was working outside the corporate world, and then the corporate world, starting really with IBM, said, wait a minute, oh my gosh, all of our Windows servers turned into Samba servers, and all our engineers are running everything, and we may as well get in alignment with them. And then it turned into, okay, we are all about open source in a serious way.

(29:44):
And so we're hiring open source. I mean, you know, you see billboards saying "hire open source developers," right? And maybe that's not capture, certainly, but it's not. I mean, if you're gonna do it for a living, you're probably gonna do it for a big company. You're gonna do it for a company that is mining common interests but serving their own interests to a greater extent. And even though good open source comes out of it, you're still on the inside. Okay, so, and there's good inside. Katherine's doing good inside at Intel, right? That's her whole job. (Sure. We're trying very hard, I assure you <laugh>.) Yeah. It's really not just about Intel, it's about the world. But so, here's where I wanna go with this, cuz it seems to me the big missing piece with AI today, in the same way as the big missing piece in computing in 1975, is the personal stuff.

(30:44):
And personal AI is as oxymoronic as personal computing was in 1974. And yet all of us need one. All of us need our own. Our lives are woefully complicated by being digital. We're digital beings now. How many subscriptions do we have? How many contracts have we signed? How many agreements have we made? How many times have we consented? What do we own? Where is it? How do we work our healthcare? All that stuff has gotten more and more complex, and it would really help to have AI on our side. You know, I've got books in a wall behind me, for those of you who can see <laugh>, in my basement place here. And I just wanna take a camera, point it at that, and have an AI tell me: here's what all those books are.

(31:35):
Oh, I remember when you bought these ones from Amazon and those ones in the local bookstores. I saw the receipts, and you ran 'em through the little scanner, which, by the way, is a great business to be in right now: make the scanner that does nothing but scan your receipts and plugs into an AI that can read 'em. And no, I'm from New Jersey, I'm sorry <laugh>. And, you know, it makes sense of it. I have these drawers over to my left here that are all full of wires. I would like to point a thing in there and say, that's a USB A to C, there's a USB B to C, there's a USB micro to micro, to A, you know, there's a bunch of FireWires you're never gonna use again.

(32:16):
But, I mean, it makes sense of our lives. And, you know, last week we talked to these guys from Crossplane, and it occurred to me that I need a Crossplane, a control plane, for all these different things in my life. And what I think happens in the developed world that we want is that mine, the one I trust, it's only mine, it's only working for me, it's my robot, right, is interacting with AIs out there that it knows are trustworthy. And it has done the searches for provenance, and it has said, no, that one's not working, this one will. And we also have ways to look into them and say, wait, mine is cheating on me now, or mine isn't. And there'll be TV shows about that 10 years from now. But nobody's working on that yet, in the same way as almost nobody was working on PCs in 1974. Though probably some people are, and you may know, Golda <laugh>

Golda Velez (33:09):
Well, I mean, there's the Alpaca LoRA model that can run on a Raspberry Pi, and, you know, you can also run on AWS, the SageMaker service. So the question is, can the private, really personal ones, that will actually work for you and preserve your privacy, somehow be funded, or incentivized, or somehow somebody develops them to the point where they're just as easy to use as the commercial ones, where people are trying to steal all your data? Because of course we know there's commercial interest that would love for you to import all your receipts and all your books and everything so they can sell your data, or, you know, whatever. And how do we resist that pull? Because those people will be able to make the really, really slick ones, because they're gonna get tons of money into it.

(33:54):
And so the question is, how do we make a good enough one that is private, that can talk to the slick one somehow? Like, how do we make that encapsulated space that's private? I mean, I know that's what Tim Berners-Lee's Solid project is all about, having your own encapsulated little pods. And maybe the other standard is keeping that stuff encrypted, only letting it be seen when you give out the keys to your own private data. That's the other standard we need to get people accustomed to: you know, what is it that you're telling the world about my stuff? Where is that boundary around my own stuff? And
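The "keep it encrypted, only hand out the keys" idea can be illustrated with a toy sealed record. This is a hand-rolled sketch, not real cryptography: a real pod would use an audited AEAD cipher (e.g. XChaCha20-Poly1305 via libsodium, or Fernet from the `cryptography` package), and the record fields here are made up:

```python
import hashlib
import json
import secrets

def keystream(key, nonce, length):
    # Derive a pseudo-random keystream by hashing key || nonce || counter.
    # Illustrative only: a real pod would use an audited AEAD cipher,
    # not a hand-rolled stream like this.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(record, key):
    """Encrypt a personal record; only the key holder can read it back."""
    plain = json.dumps(record).encode()
    nonce = secrets.token_bytes(16)
    cipher = bytes(a ^ b for a, b in zip(plain, keystream(key, nonce, len(plain))))
    return {"nonce": nonce.hex(), "data": cipher.hex()}

def open_sealed(blob, key):
    """Decrypt a sealed record with the owner's key."""
    nonce = bytes.fromhex(blob["nonce"])
    cipher = bytes.fromhex(blob["data"])
    plain = bytes(a ^ b for a, b in zip(cipher, keystream(key, nonce, len(cipher))))
    return json.loads(plain)

my_key = secrets.token_bytes(32)
pod = seal({"books": ["The Cluetrain Manifesto"], "receipts": 3}, my_key)
assert open_sealed(pod, my_key) == {"books": ["The Cluetrain Manifesto"], "receipts": 3}
```

The boundary Golda describes falls out naturally: whoever holds the pod's storage sees only `nonce` and `data` blobs; the slick commercial AI can talk to your private one, but only your key turns the blobs back into your books and receipts.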

Doc Searls (34:39):
So,

Golda Velez (34:39):
You know, again, we we we have I'm sorry, go ahead.

Doc Searls (34:41):
Well, I have a kind of answer to a question you had a few sentences ago, which is, I'll interpret it and hope I'm right: how do we come up with a logic that strips the gears of the logic right now on the sell side, which says, it is good for you that we know everything about you, cuz we can guess better at what you're going to want next, so we can sell you whatever we're selling, and it's all good for you. And it's exemplified by Amazon. Amazon's kind of doing that right now on the B2C side of their two-sided model, where they're also selling to the sellers, the sponsors that are sponsoring what's sold to you. And I think it's that you are going to get better information, companies, if we're telling you on our own, as independent entities.

(35:36):
You're not guessing at us at all, or you're guessing at us very, very little. We'll invite your guessing when we're ready in the market for whatever. This is Don Marti talking, by the way, and he's been on the show, on both our shows here. But you'll get better signaling: the signaling from demand to supply will prove to be better than the signaling from supply to demand. The knowledge-based signaling from individual demand toward corporate supply will be better than the guesswork-based signaling from over-informed, multiply uninformed, and fraudulently shared data on the sell side. And I think we can make a logical case for that. I've been trying to make it for 20 years. But I don't know if you were in the room when, to my amazement and surprise, Tim Berners-Lee said the Solid project, which you just mentioned, was at least partly inspired by a book that I wrote, which just drove me down through the floor when I heard it. So I like Solid now <laugh>. I cared some about it before, but it's like, oh yeah, it's great. <Laugh>

Golda Velez (36:46):
Had,

Doc Searls (36:50):
Oh my God, somebody came through, some DNA, or said what? No, I didn't know that. <Laugh> Sometimes journalism can work, who knew <laugh>. But I'd like to probe this. There's a theory here. The theory is that a lot of AIs that are ours, working for us at high efficiency and high fidelity to our lives and our interests, are gonna do a better job of interacting with the marketplace, and interacting with entities beyond the marketplace: with governments, with political parties, with churches, with whatever. We will make better communities if we are better informed about ourselves and our lives, and in better control of those, than in the world we have now, where giants are busy guessing at us so they can sell us stuff. And I hope what I just said is true, but I'm wondering if technically this is starting to look feasible while nobody's yet investing in it.

Golda Velez (37:50):
I mean, I think it is feasible. I think the problem is that last mile of the really good, really slick UI, you know, that takes money to build. And hopefully, I mean, I'm hopeful too, Doc. I agree with you. I think that would be better. And my hope is that, by having a voice and text interface where it acts like a person, it may not be as important that the UI guides you, because it can actually understand you. That's my hope. Romeo, I'm curious what you think about that. Yeah,

Doc Searls (38:23):
Me, can you

Golda Velez (38:28):
Repeat the question? Please? Yeah, I mean, do you think that we can get to a place where we actually have a personal AI, whether it's running on your local Android or Raspberry Pi, or whether it's actually running in the cloud but encrypted, so only you can access it, without selling the data somewhere? Do you think that can be compelling to people, even to non-technical people? I think,

Romeo Chukwuemeka (38:58):
Yeah, it's possible to get such a thing, like the LinkedTrust project, which, yeah, it's kind of running on our local machine and it works. You know, even non-technical people can make use of it.

Golda Velez (39:13):
Yeah, no, I mean, people are just gonna expect to be able to talk to their machines, and the question is, who are they talking to? Are they talking to kind of a secret entity from the world, embedded in their machine, or are they actually talking to someone they can trust? And my hope is that when it starts to feel more like talking to a person, we have these instincts for con artists, and we'll kind of be able to sense which AIs really care about us and which ones are con artists. I think that's what it'll come down to.

Katherine Druckman (39:42):
Do we, do we have that instinct? This is also a question that runs, or swims, around in my head, especially when we talk about, again, things like attestation and identity and trust. Cuz when you were talking about something like software signing, right, that makes total sense. It's signed, you know the package you're expecting is what you're getting. In that context it makes sense to me. But then when you talk about identities being either human or impersonating human, approaching human, do we really have that instinct? Well,

Golda Velez (40:20):
I think if we have the expectation that it's like somebody who really knows us, it's almost like interacting with them. I know people don't want us to anthropomorphize these AIs, but, you know, I think we can't help

Katherine Druckman (40:29):
It <laugh>

Golda Velez (40:30):
I think it makes sense too, because of the relationships that you have, that you're interacting with this entity that's acting a certain way. And I think if you think of it in terms of an entity that you expect to know you, you expect to understand you, that's where our instincts will kick in. Because when somebody that you expect to trust does something to try to sell you something, you're like, oh wait, no, nevermind, I don't wanna deal with that person anymore. They're not my friend.

Katherine Druckman (40:55):
But is that really the common experience? Where I'm going with this is, to me, humans have a tremendous capacity for trusting the wrong people, and/or messages, and/or, you know, platforms, what have you. That's where I'm going with this. So, to me, again, without getting political or whatever, but insert name of untrustworthy individual here: lots of people probably do trust that individual, or media personality, or whatever. And so then I start thinking about these kinds of webs of trust, and the way that we, as in nerds, software people, think about building trust through verifiable credentials. So let's say you can always prove the identity of insert controversial name. That's no problem, the identity is obvious, but that doesn't mean it's trustworthy. I guess that's where I'm going. That's the

Golda Velez (41:48):
Second thing. That's the second thing. Yeah, so I think there's two things. One is trusting various entities that you don't know well, that are out there in the world, that you expect to have a shallow relationship with. And yeah, you don't know, you can be taken in by those. And the other is kind of trusting your personal AI, who you've given your personal information to. Who is this long-term entity that you are giving all your books and your receipts to? Either you trust it or you don't. You know, I think that sometimes software engineers should really not be trying to invent new things. We just need to be trying to actually instantiate the things that people already do. For example, in the Byzantine Generals problem, where you have these different storage entities and one of them is a bad actor, you solve that with gossip: they talk about each other. The different storage entities say, hey, storage B, he lied to me. Oh, okay, we shouldn't trust him, as long as more than 50% of us are still truthful. You actually solve that by them gossiping about each other. So I think that all the ways that humans deal with each other at small scale, at the tribal scale that we're used to, that we're evolved for, is kind of what we wanna instantiate. And,

(43:07):
But again, I mean, this is about open source. So I think the key is to make these things as simple as possible, in pieces as simple as possible, that are not part of some big stack, like, oh, you have to install this whole big stack. You know, make simple protocols, simple languages for talking about these things, and very easy, small libraries that everybody can use, because this stuff evolves so fast.
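The gossip answer to the Byzantine storage problem that Golda describes can be sketched in a few lines. This toy model assumes full gossip (every node hears every other node's report) and simple majority voting, and it elides the hard parts real Byzantine-fault-tolerant protocols handle, such as equivocation and asynchrony, where you need n ≥ 3f + 1 nodes rather than a bare majority:

```python
import random
from collections import Counter

def gossip_consensus(honest_value, liar_count, node_count):
    """Toy version of the gossip idea: each storage node reports what it
    holds, every node hears every report, and the majority report wins.
    With more than half the nodes truthful, the lie cannot win."""
    reports = ([honest_value] * (node_count - liar_count)
               + ["corrupted"] * liar_count)
    random.shuffle(reports)  # order of gossip doesn't matter here
    tally = Counter(reports)
    return tally.most_common(1)[0][0]

# 5 honest nodes outvote 2 liars.
assert gossip_consensus("block-42", liar_count=2, node_count=7) == "block-42"
```

The human analogy holds up exactly as she says: as long as the truthful majority gossips, "storage B lied to me" propagates and B loses its reputation; the library worth building is the small, simple protocol for exchanging those reports.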

Doc Searls (43:33):
Oh my gosh, I'm taking notes here. Simple protocols, simple languages. That alone, like, I, I dig this. Like

Katherine Druckman (43:38):
We need an AI for this, an ai I

Doc Searls (43:40):
Know. Right now I need an AI to listen to this show, not just to transcribe it live, but to get the pull quotes coming from this show <laugh>, cause they're so good. But first I have to let everybody know, before we <laugh> move on, that we have this thing called Club Twit. And joining Club Twit is a great way to support the Twit Network. As a member, you get access to ad-free versions of all the shows on Twit, as well as benefits like Twit Plus, a feed that includes footage and discussions that didn't make the final show edit, and bonus shows such as Hands on Mac, Hands on Windows, Ask Me Anything, and fireside chats with Twit guests and co-hosts. As Floss Weekly listeners, you may also be interested in checking out another Club Twit exclusive show, the Untitled Linux Show.

(44:32):
It's done by Jonathan Bennett, another one of the great co-hosts here. So sign up for Club Twit, it's just $7 a month. Head over to twit tv slash club twit and join today. And we do thank you for your support. So <laugh>, we are talking about keeping things simple. So I'm wondering, you know, again, going back to what's being worked on or what's not, to what extent word of mouth is working on this thing. I mean, Golda, you're probably more engaged with more people in more places in the world than anybody I know. Do you have a sense of movement in that direction? Like sparks will land on some dry tinder here and there, and things will come out of the woodwork and get developed.

Golda Velez (45:22):
Well, I mean, I think you're saying I'm very shallow, Doc, but no <laugh>. Which I am, I mean, like, I'm

Doc Searls (45:30):
So excited about,

Golda Velez (45:31):
But I'm not

Doc Searls (45:32):
Quite. I know you're physically small, but I didn't think you were shallow. I, I, I'm not gonna buy that one.

Golda Velez (45:37):
I think that it's a very exciting time. I mean, it's a bit of a scary time, but it's also a very, very exciting time. I think there's so many people who realize it's a tipping point, that the AI is just exponentially changing. Right now, I was just reading that they've attached the thing that ran AlphaGo, which is the reinforcement learning, and Google's now attaching that to the generative AI, and has this new thing called Gemini that's gonna come out soon, and that may blow away ChatGPT. So we're just in this stage of things building on themselves very fast. But also we're at a stage where many people have realized that, wow, maybe it was bad to be captured in Facebook and Twitter. And, you know, some of us may have thought that before, but I think there's a large part of, especially the developer and engineer community, that's like, well, decentralization might be important.

(46:25):
So we're kind of at this very critical moment, and things are moving very fast. And it's hard to know, because I just joined the Hugging Face Discord, cuz Hugging Face is kind of the big open source one, right? And I think the conversations I'm seeing there are still pretty early on. Like, hey, let's have a hackathon, anybody have a business idea, I wanna do this on medical. You know, we're still at kind of the beginning of that acceleration curve. So it's a good time to join it. It's a good time to throw out your ideas and to start some projects. And I do wanna put in a word for one thing that we do here, which is the way that we're sharing work-weighted equity.

(47:10):
You can do that a couple ways. Like, Slicing Pie lets you do it, Fairman lets you do it, and DAOs can let you do it, Colony DAO. But sharing work-weighted equity on a project. So there might be some private things, like maybe some of the encrypted data that's aggregated is private, but maybe the code is open, and there's some owned things that you can make a living off of, because people have to make a living. And doing a lot of those projects in small, cooperative, again, sort of tribal-sized groups is my hope of how that will grow. I call it dinosaurs to mammals, where Twitter and Meta are like dinosaurs: one little pea brain trying to control the whole body and clumsily, you know, jumping around.

(47:54):
And I'm hoping that we can beat that with a bunch of mammal-sized little cooperatives that interact with each other. But to do that, we have to develop these languages. And that's why the Protocols, Not Platforms paper was also very influential; you know, we got AT Proto out of that, and other people are still referring to it. So if we develop these languages and standards that let us have small little groups that don't try to be the center of the universe, that are federated, like ActivityPub, which is a federated standard, but it doesn't have to be just ActivityPub, there's other types of hopefully emerging standards for how AIs talk to each other. That's what I think we need. And I'm hoping that that's happening, and I hope that maybe some of the listeners here have ideas about it. Yeah,
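The work-weighted equity idea, in the spirit of models like Slicing Pie, reduces to a small proportional split. A minimal sketch, with made-up contributor names and multipliers:

```python
def equity_shares(hours, multipliers=None):
    """Split project equity in proportion to weighted contributed work.

    `hours` maps contributor -> hours worked; `multipliers` optionally
    weights some contributors more heavily (early risk, scarce skills,
    cash put in), which is roughly what Slicing Pie-style models do.
    Returns fractional shares that sum to 1.
    """
    multipliers = multipliers or {}
    weighted = {p: h * multipliers.get(p, 1.0) for p, h in hours.items()}
    total = sum(weighted.values())
    return {p: w / total for p, w in weighted.items()}

# cy worked the fewest hours but took early risk, so gets a 2x weight.
shares = equity_shares({"ada": 120, "bo": 60, "cy": 20},
                       multipliers={"cy": 2.0})
assert abs(sum(shares.values()) - 1.0) < 1e-9
assert shares["ada"] > shares["bo"] > shares["cy"]
```

The interesting design question for a cooperative isn't the arithmetic, it's agreeing on the multipliers; once a group writes those down, the split recomputes itself as work accrues, which keeps the equity "dynamic" rather than fixed at founding.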

Doc Searls (48:37):
<Laugh> In the back channel, look at this, they have said it's a great show <laugh>, a good episode. And they say the members only get an eye patch, which is pretty cool <laugh>. Possible show title: beat dinosaurs, mammal-sized groups. So it seems to me, as an almost summary as we're getting down toward the end of the show, to just invite people to look into these. We may end up, ideally, if things go the right way, which they may not, that's the world, with something almost like protocols. Cuz they are protocols, in the formal diplomatic sense, with Kim Cameron's seven laws of identity on the one hand and Elinor Ostrom's eight rules for managing a commons on the other. Just look 'em up: Kim Cameron's seven laws, Elinor Ostrom's eight principles. Because I think those are both functional.

(49:37):
They're ideal in a way. Ostrom's are based on looking at working commons and how groups manage them, whether it's, you know, groups of police, farmers' markets, all kinds of things. She won a Nobel Prize for her work on that. Kim unfortunately passed too soon to get his, but he probably did more to get Microsoft on the right side of history than anybody else who worked there, with those seven laws, way back in 2004. I just wanted to put that in as a thought without saying much more about it, cuz we're toward the end of the show, so we probably need to sum up at this point. As always, there's way, way more to talk about than we can. So what big questions have we not touched on? Maybe I'll put this one to Romeo. What have we not talked about that we should have talked about, that we might save for next time? That might be another way of putting it.

Romeo Chukwuemeka (50:45):
All right. I think we didn't do a deep dive into open source models, like the Alpaca model and adapters. Yeah, I wanted

Doc Searls (51:04):
To. We'll save that one for the alpacas. So Golda, what do you think?

Golda Velez (51:10):
Yeah, no, I think I'm a little bit with Romeo. I wasn't sure how much we were supposed to get into code here; I think I was kind of high level. It would be interesting to dive into the how. I think we discussed what the solutions should be, and it's like, okay, suppose we were gonna do it. Let's pretend we're designing it right now. What would we do?

Romeo Chukwuemeka (51:32):
Oh, something like how do we amplify the benefits of long-term identifiers?

Doc Searls (51:38):
That's a good one. How do we amplify the benefits of long-term identifiers? Because there will be both permanent and disposable identifiers. You know, barcodes and QR codes are like that, they get reused. This is a pro travel tip from the traveler: peel off those extra barcodes they stick on your luggage, because they may be used for something else three weeks from now. That's a temporary thing, but the world will be full of too many DIDs at some point, I think. And we're gonna need ways of dealing with that. What are the protocols for that?
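For anyone who wants to poke at the DIDs mentioned here, the identifier shape itself is tiny. A sketch of a parser, using a pattern simplified from the full ABNF grammar in the W3C DID Core specification:

```python
import re

# Simplified: real DID method-specific ids have a richer grammar
# (percent-encoding, colons for sub-namespaces, etc.).
DID_PATTERN = re.compile(r"^did:([a-z0-9]+):([A-Za-z0-9.\-_:%]+)$")

def parse_did(did):
    """Split a W3C-style DID into (method, method-specific id).
    Raises ValueError if the string is not shaped like a DID."""
    m = DID_PATTERN.match(did)
    if not m:
        raise ValueError(f"not a DID: {did!r}")
    return m.group(1), m.group(2)

method, ident = parse_did("did:web:example.com")
assert method == "web" and ident == "example.com"
```

Disposable identifiers would just be DIDs, often `did:key` ones minted per session and thrown away; the garbage-collection question Doc raises is what to do with the long tail of identifiers that were never meant to last.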

Golda Velez (52:18):
Well, there's gotta be great garbage collection in general. We've started saving everything.

Doc Searls (52:22):
Oh man. Yeah. I mean, back in the eighties, I think it was Esther Dyson who said, the challenge is not adding value, it's subtracting garbage. Much harder <laugh> to subtract garbage. So I think we're about out of time now, and oh my gosh, the back channel has so much <laugh> good action going on. So, a trivial question, but a fun one anyway: what are your favorite text editor and scripting language? Both of you.

Golda Velez (52:56):
Vim and Python. It used to be Perl, but yeah, Python.

Romeo Chukwuemeka (53:02):
All right. Mine is VS Code, and I'm, I'm experts.

Doc Searls (53:10):
That's great. Well, this has been an awesome show. Thanks so much for coming on. And given <laugh> the evanescence of plasticity and flammability and everything going on in all these topics, we'll have to have you back soon <laugh>. Love it. This is the best. We haven't even touched the guns and cars, so that's <laugh>, and that's a big one. Love that one. You got a title for that? The DID did it <laugh>. Show title: did the DID do it?

Katherine Druckman (53:49):
And we know what the DID did.

Doc Searls (53:51):
<Laugh>. You gotta have these guys back. You gotta... Essence of Plasticity, that's another one. Oh my gosh, John the Beard says, I identify as the recon drone <laugh>. Right. This is great. So <laugh>, we'll see you next time, guys. Yeah. So, Katherine <laugh>. Yeah. On screen, there's just the two of us for the moment. <Laugh>,

Katherine Druckman (54:25):
I think we just barely touched all of the questions that I have. The great thing about talking about this topic is that all of these things just start flooding into my head and I have to grab them as they fly by. And there are too many. So yeah, that's why we're going to have to talk about it over and over and over again until I catch all of the questions.

Doc Searls (54:46):
That's what happens, happens every time I'm talking to Golda. It's like, oh, wait a minute, what just got said? <laugh> I know, I didn't write that down. That was good. And then another

Katherine Druckman (54:57):
On, right, it's just

Doc Searls (54:58):
<Laugh>. Yeah. Yeah. It's pretty good. And I noticed on the back channel they say, get Romeo back on his own, that'll be good <laugh>. Somebody else says, I like how Romeo is a name still being given. That's great <laugh>. Yeah, it's also a role to play, isn't it, I suppose. Yeah. Wow.

Katherine Druckman (55:22):
Well, that's a whole conversation about identity right there. I mean, names.

Doc Searls (55:25):
I know, I know. Yeah. It's like, it's interesting how <laugh>,

Katherine Druckman (55:28):
I'll say that for another time.

Doc Searls (55:30):
Yeah. It's interesting how, as with blockchain, I mean, other things come and go, but I'll say blockchain came and went. Cryptocurrency and crypto-everything kind of came and went. And now there's AI. I don't think AI's gonna go, but identity's been there through all of it. It's always there. And in fact, something Golda didn't say this time, that she did say when we talked before, is that one of the problems with cryptocurrency is that it doesn't have identity, right? You're dealing with people, and you don't know who you're dealing with. So we'll have to explore that the next time we talk, because I think I probably got that a bit wrong. But there's a difference in kind between the conversation we had around cryptocurrency and the kind we need to have about AI; they may involve both, and both involve identity in different ways.

Katherine Druckman (56:26):
Well, an interesting thing to bring up about cryptocurrency is that cryptocurrency at least did get mainstream adoption to an extent. You know, lots of non-technical people understand and trade in cryptocurrency and all of that. And that's kind of an interesting parallel when you're talking about this conversation about identity, to

Doc Searls (56:49):
Me. Yeah, it's, yeah, I mean, a troublesome thing for me about cryptocurrency is that I knew what it was and I was too lazy to buy, you know, Bitcoin when it was 2 cents or whatever it was, you know, <laugh>.

Katherine Druckman (57:04):
Yeah. Me.

Doc Searls (57:05):
Oh, well, oh, well, you know <laugh>, there's a lot of that. Okay, well, so give us a plug and then we'll, oh, a plug? Sure. We'll punch out. Yeah, go ahead. Sorry.

Katherine Druckman (57:18):
No, let's see. Where else can you find me trying to catch the thoughts that fly by in my brain? Reality 2.0, and you can find me at open.intel.com/podcast, both of those things.

Doc Searls (57:33):
Yes. And I'm pretty sure, looking at our schedule, because we just got a note very recently, that we need to shuffle guests. So we'll fix that. Anyway, come back next week. We will have something good, I guarantee it. It always is. Until then, I'm Doc Searls. This is Floss Weekly, and we'll see you then.

Jonathan Bennett (57:55):
Hey, we should talk Linux, the operating system that runs the internet, your game consoles, cell phones, and maybe even the machine on your desk. And you already knew all that. What you may not know is that Twit now has a show dedicated to it, the Untitled Linux Show. Whether you're a Linux pro, a burgeoning sysadmin, or just curious what the big deal is, you should join us on the Club Twit Discord every Saturday afternoon for news analysis and tips to sharpen your Linux skills. And then make sure you subscribe to the Club Twit exclusive Untitled Linux Show. Wait, you're not a Club Twit member yet? Well, go to twit tv slash club twit and sign up. Hope to see you there.
