• 0 Posts
  • 46 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • I think that’s a common conclusion in general. I dated a woman once whose mind went to that explanation constantly, for all kinds of things, and it was basically always a distorted picture of reality. I think people just don’t get the validation they need, due mostly to arbitrary bullshit and the world sucking, and that makes it easy to buy into toxic, self-hating memes.

  • The AI summaries were judged significantly weaker across all five metrics used by the evaluators, including coherency/consistency, length, and focus on ASIC references. Across the five documents, the AI summaries scored an average total of seven points (on ASIC’s five-category, 15-point scale), compared to 12.2 points for the human summaries.

    The focus on the (now-outdated) Llama2-70B also means that “the results do not necessarily reflect how other models may perform,” the authors warn.

    …to assess the capability of Generative AI (Gen AI) to summarise a sample of public submissions made to an external Parliamentary Joint Committee inquiry, looking into audit and consultancy firms

    In the final assessment, ASIC assessors generally agreed that AI outputs could potentially create more work if used (in their current state), due to the need to fact-check outputs, or because the original source material actually presented information better. The assessments showed that one of the most significant issues with the model was its limited ability to pick up the nuance or context required to analyse submissions.

    The duration of the PoC was relatively short and allowed limited time for optimisation of the LLM.

    So basically this study concludes that Llama2-70B with basic prompting is not as good as humans at summarizing documents submitted to the Australian government by businesses, and that its summaries are not good enough to be useful for that purpose. But there are some pretty significant caveats here, most notably the relative weakness of the model they used (I like Llama2-70B because I can run it locally on my computer, but it’s definitely a lot dumber than ChatGPT), and the fact that summarizing government/business submissions is likely a harder and less forgiving task than most other things you might want a generated summary of.
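
    For anyone curious what “basic prompting” a local Llama2-70B looks like in practice, here’s a minimal sketch using llama-cpp-python. The model file name, prompt wording, and sampling settings are my own guesses, not the study’s actual configuration; ASIC doesn’t publish its prompts.

    ```python
    # Minimal "basic prompting" summarization with a locally-run Llama2-70B.
    # The GGUF file name below is a hypothetical local quantized build.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-2-70b-chat.Q4_K_M.gguf",  # assumed local model file
        n_ctx=4096,  # Llama 2's native context window
    )

    def summarize(document: str) -> str:
        # Llama-2-chat's [INST] instruction format, with a single plain
        # instruction and no few-shot examples -- i.e. "basic prompting".
        prompt = (
            "<s>[INST] Summarise the following public submission, "
            "focusing on the key points and recommendations.\n\n"
            f"{document} [/INST]"
        )
        out = llm(prompt, max_tokens=512, temperature=0.2)
        return out["choices"][0]["text"].strip()
    ```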

  • do they need to? I don’t think so.

    Why not? How can you be sure that all these laws are going to cover all the same things and not have many tricky edge cases? What would keep them from being like that? Again, these laws give unique rights to residents of their respective states to make particular demands of websites, and they aren’t copy-pastes of each other. There’s no documented set of “best practices” that is guaranteed to encompass all of them.

    they don’t want this solution, however; in my understanding they instead want to force every state to have weaker privacy laws

    I can’t speak to what they really want privately, but in the industry letter linked in the article, it seems that the explicit request is something like a US equivalent of the GDPR:

    A national privacy law that is clear and fair to business and empowering to consumers will foster the digital ecosystem necessary for America to compete.

    To me that seems like a pretty sensible thing to ask for: a centrally codified set of practices to avoid confusion and complexity.


  • In 2022, industry front groups co-signed a letter to Congress arguing that “[a] growing patchwork of state laws are emerging which threaten innovation and create consumer and business confusion.” In 2024, they were at it again, using the term four times in five paragraphs.

    Big Tobacco did the same thing.

    Is this really a fair comparison though? A variety of local laws about smoking in restaurants makes sense because restaurants are inherently tied to their physical location. A restaurant would only have to know and follow the rules of their town, state and country, and the town can take the time to ensure that its laws are compatible with the state and country laws.

    A website is global. Every local law that can be enforced must be followed, and the burden isn’t on legislators to make sure their rules are compatible with all the other rules. Needing to make a subtly different version of a website for every state and country in order to be in full compliance with all their different rules, and needing to have lawyers check over each of them, would make building and maintaining a website or other online service prohibitively difficult and expensive. That seems like a legitimate reason to want unified standards (see the sketch after this comment for what that per-state branching looks like in code).

    To be fair, there are plenty of privacy regulations this wouldn’t apply to, like the article’s example of San Francisco banning the use of facial recognition tech by police. But the industry complaint linked in the article references laws like https://www.oag.ca.gov/privacy/ccpa and https://leg.colorado.gov/bills/sb21-190 that obligate websites to fulfill particular demands made by residents of those respective states. Subtle differences in those sorts of laws seem like something that could cause actual problems, unlike differences in smoking laws.
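
    Here’s a toy sketch of that per-state branching. The obligations listed are simplified paraphrases for illustration, not accurate legal summaries of the CCPA or Colorado’s SB21-190, and the helper names are hypothetical.

    ```python
    # Toy illustration of per-jurisdiction privacy branching. The entries
    # below are simplified, hypothetical paraphrases -- NOT legal summaries
    # of the CCPA or SB21-190 -- meant to show how each new state law adds
    # another subtly different branch that needs its own legal review.
    JURISDICTION_RULES = {
        "CA": {"do_not_sell_link": True, "opt_out_signal": "GPC"},
        "CO": {"do_not_sell_link": True, "opt_out_signal": "universal opt-out mechanism"},
        # ...every additional state law adds another entry to verify.
    }

    def required_privacy_widgets(state: str) -> list[str]:
        """Return the privacy UI elements a visitor from `state` must see."""
        rules = JURISDICTION_RULES.get(state, {})
        widgets = []
        if rules.get("do_not_sell_link"):
            widgets.append('"Do Not Sell My Personal Information" link')
        if rules.get("opt_out_signal"):
            widgets.append(f"honor {rules['opt_out_signal']} signals")
        return widgets

    print(required_privacy_widgets("CA"))
    ```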

  • If this were real I’d be interested in the details. Did anon accept fiat or crypto? Were the boxes advertised mainly on the darknet itself, or on the clearnet? And if clearnet, how did they find customers who know wtf an onion link is or how to use one? Were the police alerted by an irate customer calling them from near the return address, or were the cops buying the cookies as a sting operation?