I write about technology at theluddite.org

  • 1 Post
  • 7 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • A few days later, DFCS presented Patterson with a “safety plan” for her to sign. It would require her to delegate a “safety person” to be a “knowing participant and guardian” and watch over the children whenever she leaves home. The plan would also require Patterson to download an app onto her son’s phone allowing for his location to be monitored. (The day when it will be illegal not to track one’s kids is rapidly approaching.)

    Of course there’s a grift train. I’d be very curious to know more about that company, its owners, and its financials.

    Also tagging @abucci@buc.ci (can someone tell me how to do that right?). Seems like something that might interest you, re: our recent conversation.


  • Same, and thanks! We’re probably a similar age. My own political awakening was occupy, and I got interested in theory as I participated in more and more protest movements that just sorta fizzled.

    I 100% agree re: Twitter. I am so tired of people pointing out that it has lost 80% of its value or whatever. Once you have a few billion, there’s nothing that more money can do to your material circumstances. Don’t get me wrong, Musk is a dumbass, but, in this specific case, I actually think that he came out on top. That says more about what you can do with infinite money than anything about his tactical genius, because it doesn’t exactly take the biggest brain to decide that you should buy something that seems important.


  • Totally agreed. I didn’t mean to say that it’s a failure if it doesn’t properly encapsulate all complexity, but that the inability to do so has implications for design. In this specific case (as in many cases), the error they’re making is that they don’t realize the root of the problem that they’re trying to solve lies in that tension.

    The platform and environment are something you can shape even without an established or physical community.

    Again, couldn’t agree more! The platform is actually extremely powerful and can easily change behavior in undesirable ways for users, which is actually the core thesis of that longer write up that I linked. That’s a big part of where ghosting comes from in the first place. My concern is that thinking you can just bolt a new thing onto the existing model is to repeat the original error.


  • This app fundamentally misunderstands the problem. Your friend sets you up on a date. Are you going to treat that person horribly? Of course not. Why? First and foremost, because you’re not a dick. Your date is a human being who, like you, is worthy and deserving of basic respect and decency. Second, because your mutual friendship holds you accountable. Relationships in communities have overlapping structures that mutually impact each other. Accountability is an emergent property of that structure, not something that can be implemented by an app. When you meet people via an app, you strip away both the humanity and the community, and with them goes the individual and communal accountability.

    I’ve written about this tension before: As we use computers more and more to mediate human relationships, we’ll increasingly find that being human and doing human things is actually too complicated to be legible to computers, which need everything spelled out in mathematically precise detail. Human relationships, like dating, are particularly complicated, so to make them legible to computers, you necessarily lose some of the humanity.

    Companies that try to whack-a-mole patch the problems with that will find that their patches suffer from the same flaw: their accountability structure is a flat, shallow version of genuine human accountability, and will itself produce pathological behavior. The problem is recursive.


  • “…AI systems in the future, since it helps us understand how difficult they might be to deal with,” lead author Evan Hubinger, an artificial general intelligence safety research scientist at Anthropic, an AI research company, told Live Science in an email.

    The media needs to stop falling for this. This is a “pre-print,” i.e. a non-peer-reviewed paper, published by the AI company itself. These companies are quickly learning that, amid the AI hype, they can get free marketing by pretending to do “research” on their own product. It doesn’t matter what the conclusion is, whether it’s very cool and going to save us or very scary and we should all be afraid, so long as it’s attention-grabbing.

    If the media wants to report on it, fine, but don’t legitimize it by pretending that it’s “researchers” when it’s the company itself. The point of journalism is to speak truth to power, not regurgitate what the powerful say.