• 0 Posts
  • 45 Comments
Joined 11 months ago
Cake day: December 16th, 2023

  • It’s interesting that Axios specifically calls out the difference in reliability between homicide and overall violent-crime statistics:

    The big picture: Homicides are more straightforward to compare year-to-year from pre-2021 to the present because the criteria for classifying them have remained the same while police have changed their methods of recording other violent crimes. Beginning in 2021, the FBI and police departments started shifting to the National Incident-Based Reporting System (NIBRS) from the decades-old Summary Reporting System (SRS). That allowed law enforcement agencies to submit more details on crimes like aggravated assaults but resulted in reported surges in violent crime in cities like Chicago and Minneapolis.

    I recently read a (weirdly antagonistic, conspiracy-flavored) article about the FBI quietly revising its 2022 crime statistics: https://www.realclearinvestigations.com/articles/2024/10/16/stealth_edit_fbi_quietly_revises_violent_crime_stats_1065396.html

    It makes me wonder if we need to go back and revisit the last few years of crime statistics, given the switch in reporting systems, to get a better idea of what’s going on…




  • Reminder that all these chat-formatted LLMs are just text-completion engines trained on text formatted like a chat. You’re not having a conversation with one; it’s “completing” the chat history you provide, by randomly(!) choosing the next text tokens that seem to best fit the text so far.

    If you don’t directly provide, in the chat history and/or the completion prompt, the information you’re trying to retrieve, you’re essentially fishing for text in a sea of random tokens that merely seems to fit the question.

    It will always complete the text: even when the tokens it picks only minimally fit the context, it chooses the best completion it can, but it completes the text regardless.

    This is how they work, and anything else is usually the company putting in a bunch of guide bumpers that reformat prompts to coax the models into responding in a “smarter” way (see GPT-4o and “chain-of-thought” prompting).
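
    The mechanism described above can be sketched in a few lines of Python. This is a toy illustration, not any real model’s API: the chat-template markers (`<|user|>`, `<|assistant|>`) and the tiny hand-written “logits” dictionary are made up for demonstration. The point is that the “chat” is flattened into one string to complete, and the next token is drawn by a weighted random sample (softmax with temperature), which is the “randomly(!)” part.

    ```python
    import math
    import random

    def format_chat(messages):
        # A chat is just one flat string the model continues.
        # The <|role|> markers here are illustrative; real templates vary by model.
        text = ""
        for role, content in messages:
            text += f"<|{role}|>\n{content}\n"
        text += "<|assistant|>\n"  # the model "completes" from this point
        return text

    def sample_next_token(logits, temperature=1.0):
        # Softmax over the scores, then a weighted random draw:
        # higher-scoring tokens are more likely, but nothing is guaranteed.
        scaled = [score / temperature for score in logits.values()]
        peak = max(scaled)
        exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
        total = sum(exps)
        weights = [e / total for e in exps]
        return random.choices(list(logits.keys()), weights=weights, k=1)[0]

    prompt = format_chat([("user", "What is the capital of France?")])
    # Toy scores standing in for a real model's output over a tiny vocabulary.
    fake_logits = {"Paris": 4.0, "London": 1.0, "Lyon": 0.5}
    print(sample_next_token(fake_logits, temperature=0.7))
    ```

    Lowering the temperature sharpens the distribution toward the top-scoring token; raising it flattens the distribution, which is why the same prompt can yield different completions run to run.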