  • This is actually quite an interesting case study for jury selection and vetting. The motive clearly relates to political views about the healthcare industry, which affects every single American other than extreme outliers. It’s therefore practically impossible to select a jury that is entirely neutral: no matter how politically unengaged the jurors are, the issue still affects them.

    Arguably, the most neutral person would be someone who hasn’t interacted much with healthcare as a citizen. But healthcare issues in America start straight away at birth, because the process of birth itself is a healthcare matter for both mother and child, and there’s no opting out of being born. The only exceptions are people who are foreign-born or from very wealthy backgrounds, and you can’t have a jury composed solely of them, because that wouldn’t be representative of the American public.

    I wouldn’t be surprised if this drags on for a long time before any trial even starts. In fact, I’d be suspicious if it doesn’t.

  • As someone who works in the field of criminal law (in Europe, though I would be shocked if it weren’t the same in the US), I’m not actually very worried about this. By that I don’t mean to say it’s not a problem.

    The risk of evidence being tampered with or outright falsified already exists, and we know how to deal with it. What AI will do is lower the technical barrier to doing it, making the practice more common.

    While most AI images can currently be spotted by anyone with some familiarity with them, they’re only going to get better, and I don’t imagine it will take long before they’re so good the average person can’t tell.

    In my opinion this will be dealt with via two mechanisms:

    • Automated analysis of all digital evidence for signatures of AI generation, as standard practice. Whoever first lands contracts with police departments to provide bespoke software for quick forensic AI detection is going to make a lot of money.

    • A growth in demand for digital forensics experts who can testify on whether something is AI generated. I wouldn’t expect them to be consulted on every case with digital evidence, but it would become standard practice where the defence challenges a specific piece of evidence at trial.

    Other than that, I don’t think the current state of affairs around doctored evidence will change much. As I say, it’s not a new phenomenon, so countries already have the legal and procedural frameworks in place to deal with it; they just need to be adjusted where necessary to accommodate AI.

    What concerns me much more than the issue you raise is the emergence of activities that are uniquely AI-dependent and require new legislation. For example, how does AI-generated porn of real people fit into existing legislation on sex offences? Should it be an offence at all? Should it be treated differently from drawing porn of someone by hand? Would it cover manually created digital images made without AI? If it isn’t made generally illegal, what about when it depicts a child? And is it the generation of the image that should be regulated, or the distribution?

    That’s just one example. What about AI-enabled fraud? That’s a whole can of worms in itself, legally speaking. These are questions that, in my opinion, are beyond the remit of the courts and will require direction from central governments and fresh, tailor-made legislation.