KaiserPro a day ago

When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin I was asked to use a modified version of a foundation model which was steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but it wasn't all that useful.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API is tied to a bank account, and vibe coding a command-and-control system on a very public platform seems like a bad choice.

  • ACCount37 a day ago

    As if that makes any difference to cybercriminals.

    If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

    Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard. I had to bypass the "safety" filters for a few things, and it took about an hour.

  • Milderbole a day ago

    If the article is not just marketing fluff, I assume a bad actor would select Claude not because it’s good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to in most coding copilots because the model was trained on a good range of data reflecting Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns behind the code of the systems you’re trying to break. Or use Claude to write a phishing attack, because then the output is more likely to resemble what our eyes would expect.

    • Aeolun a day ago

      Why would someone in China not select Claude? If the people at Claude don’t notice, then it’s a pure win. If they do notice, what are they going to do, arrest you? The worst they can do is block your account; then you make a new one with a newly issued false credit card. Whoopie doo.

      • criemen a day ago

        > Why would someone in China not select Claude?

        Because Anthropic doesn't provide services in China? See https://www.anthropic.com/supported-countries

        • dboreham a day ago

          Can confirm Claude doesn't even work in Hong Kong. That said I fired up my VPN and...then it did work.

          • 0xWTF 11 hours ago

            Yeah, I love folks who worry about China having access to models and GPUs. I mean, friend, they have 1.3B people. They could put a crack AI team in every country in the world, tomorrow. But yes, instead, it's far cheaper to let each of those AI teams VPN to any country, all the time.

            • glenneroo an hour ago

              If they actually cared, they would just block VPNs. Valve does this when you try to create an account.

              • fluoridation 15 minutes ago

                If we're talking about state funding, that's not a problem. You just send a national to live in a residential area and then a team can proxy through that connection.

        • xadhominemx 16 hours ago

          Not really a relevant issue or concern for a nation state backed hack…

          • BobbyJo 7 hours ago

            Or even a regular guy for that matter... VPNs exist.

    • KaiserPro a day ago

      What you're describing would be plausible if this were about exploiting Claude to get access to organisations that use it.

      The gist of the Anthropic thing is that "Claude made, deployed and coordinated" a standard malware attack. Which is a _very_ different task.

      Side note, most code assistants are trained on broadly similar coding datasets (i.e. GitHub scrapes).

  • maddmann a day ago

    Good old Meta and its teenage data labeler

    • heresie-dabord a day ago

      I propose a project that we name Blarrble, it will generate text.

      We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble to fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

      Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?

  • iterateoften a day ago

    > your API is tied to a bank account,

    There are a lot of middlemen like OpenRouter who gladly accept crypto.

    • mrtesthah 15 hours ago

      Can you show me exactly how to pay for open router with monero? Because it doesn’t seem possible.

      • Tiberium 13 hours ago

        There are tons of websites that will happily swap Monero for Ethereum, and then you can use it to pay. Most of those websites never actually do KYC or proper fund verification, unless you're operating with huge amounts or are suspicious in some other way.

  • jgalt212 a day ago

    > now run by a teenage data labeller

    sick burn

    • y-curious a day ago

      I don’t know anything about him, but if he is running a department at Meta, he is at the very least a political genius as well as a teenage data labeller

      • tim333 14 hours ago

        I was just watching the Y Combinator interview with Alexandr Wang, who I guess is the person being referred to: https://youtu.be/5noIKN8t69U

        The teenage data labeler thing was a bit of an exaggeration. He did found Scale AI at nineteen, which does data labeling amongst other things.

        • rhines 11 hours ago

          I watched this interview when I first heard about Alexandr Wang. I'd seen he was the youngest self made billionaire, which is a pretty impressive credential to have under your belt, and I wanted to see if I could get a read on what sets him apart.

          Unfortunately he doesn't reveal any particular intelligence, insight, or drive in the interview, nor does he in other videos I found. Possibly he hides it, or possibly his genius is beyond me. Or possibly he had good timing on starting a data labelling company and then leveraged his connections in SV (including being roommates with Sam Altman) to massively inflate Scale AI's valuation and snag a Meta acquisition.

          • tim333 5 hours ago

            I got the impression he's intelligent and hard-working but to a large extent got lucky. I mean, his idea was to do a better version of Mechanical Turk, which is OK as an idea but not amazing or anything. But then all these LLM companies were getting billions thrown at them by investors who thought they'd reach AGI soon, yet the models didn't work well without lots of humans doing fine-tuning, and Wang's company provided an outlet to throw that money at to get humans to try to do just that.

            I don't know how that will go at Meta. At the moment having lots of humans tweak LLMs still seems to be the main thing at the AI companies, but that could change.

          • id an hour ago

            Or maybe, just maybe, becoming a billionaire has way more to do with luck than anything else.

            I don't know about any billionaire in the history of billionaires who appears to have gotten there solely based on special abilities. Being born into the right circumstances is all it really takes.

            • tim333 21 minutes ago

              Oprah Winfrey? Still some luck, but she didn't start in great circumstances.

        • ulfw 12 hours ago

          What other things?

          • tim333 2 hours ago

            They do testing like 'Humanity's Last Exam' and they build custom LLMs for some of the largest companies, the defense department and other US government work - bit here https://youtu.be/5noIKN8t69U?t=2037

          • objektif 8 hours ago

            Semantic tagging.

            • ulfw 5 hours ago

              So... Tagging and labeling ok.

      • tomrod a day ago

        It's a simple heuristic that will save a lot of time: something that seems too good to be true usually is.

      • antonvs 20 hours ago

        Presumably this is all referring to Alexander Wang, who's 28 now. The data-labeling company he co-founded, Scale AI, was acquired by Meta at a valuation of nearly $30 billion.

        But I suppose the criticism is that he doesn't have deep AI model research credentials. Which raises the age-old question of how much technical expertise is really needed in executive management.

        • KaiserPro 17 hours ago

          > how much technical expertise is really needed in executive management.

          For running an AI lab? A lot. Put it this way: part of the reason Meta has squandered its lead is that it decided to fill its genAI dept (pre-Wang) with non-ML people.

          Now that's fine, if they had decent product design and a clear roadmap for the products they want to release.

          But no, they are just learning ML as they go, coming up with bullshit ideas and seeing what sticks.

          But where it gets worse is that they take the FAIR team and pass them around like a soiled blanket: "You're a team that is pushing the boundaries in research, but also you need to stop doing that and work on this chatbot that pretends to be a black gay single mother."

          All the while you have a sister department, RL-L run by Abrash, who lets you actually do real research.

          Which means most of FAIR have fucked off to somewhere less stressful and more focused on actually doing research, rather than posting about how you're doing research.

          Wang's missteps are numerous; the biggest one is re-platforming the training system. That's a two-year project right there, for no gain. It also force-forks you from the rest of the ML teams. Given how long it took to move to MAST from FBLearner, it's going to be a long slog. And that's before you tackle increasing GPU efficiency.

          • lp251 11 hours ago

            why did they move to fblearner

            what is the new training platform

            I must know

            • KaiserPro 5 hours ago

              Meta has been itching to kill FBLearner for a while. It's basically an Airflow-style interface (much better to use as a dev, not sure about admin; I think it might even pre-date Airflow).

              They have mostly moved to MAST for GPU stuff now; I don't think any GPUs are assigned to FBLearner anymore. This is a shame because it feels a bit less integrated into Python and a bit more like "run your exe on n machines"; however, it has a more reliable mechanism for doing multi-GPU things, which is key for doing any kind of research at speed.

              My old team is not in the superintelligence org, so I don't have many details on the new training system, but there was lots of noise about "just using vercel", which is great apart from all of the steps and hoops you need to go through before you can train on any kind of non-open-source data. (FAIR had/has their own cluster on AWS, but that meant they couldn't use it to train on data we collected internally for research, i.e. paid studies and data from employees who were bribed with swag.)

              I've not caught up with the drama around the other choices. Either way, it's kinda funny to watch "not invented here syndrome" smashing into "also not invented here syndrome".

        • tomrod 16 hours ago

          > Which raises the age-old question of how much technical expertise is really needed in executive management.

          Whoever you choose to set up as the core decision maker, you get out whatever their expertise is, with only minor impact from their advisers.

          Scaling a business is a skill set. It's not a skill set that captures or expands the frontier of AI, so it seems fair to label the gentleman's expensive buyout a product-development play rather than a technology play.

        • NewsaHackO 18 hours ago

          Hopefully he isn’t referring to Alex Wang, as it would invalidate anything else he said in his comment

        • gpi 12 hours ago

          Alexandr

      • lijok 21 hours ago

        They hired a teenager to run one of their departments and thought that meant the teenager was smart instead of realizing that Meta’s department heads aren’t

        • antonvs 17 hours ago

          > They hired a teenager to run one of their departments

          Except they didn’t. The person in question was 28 when they hired him.

          He was a teenager when he cofounded the company that was acquired for thirty billion dollars. But the taste of those really sour grapes must be hard to deal with.

          • KaiserPro 16 hours ago

            > The person in question was 28 when they hired him.

            Comic hyperbole, darling. I know that's hard to understand, especially when you're one of the start-up elect who still believes.

            But FAIR is dead, Meta has a huge brain drain, and Alex only has hardware and money to fix it. Worse for him, he's surrounded by poisonous empire builders and/or courtesans who can play Zuck much more effectively than he can.

            Wang needs Zuck, and Zuck needs results. The problem is, people keep on giving Zuck ideas, like robotics, and world models, and AI sex bots.

            Wang has to somehow keep up productivity and integrate into Meta's wider culture. Oh, and if he wants any decent amount of that $30 billion, he's gotta stick it out for 4 years.

            I did my time and got my four years of RSUs from the buyout. My boss didn't; neither did the CTO or about two-thirds of the team. Meta will eat you, and I don't envy him.

            • antonvs 11 hours ago

              > Comic hyperbole darling.

              Even if you say so yourself.

              > I know that's hard to understand, especially when you're one of the start up elect, who still believes.

              There's a lot of projection going on in that sentence.

              • KaiserPro 4 hours ago

                H Y P E R B O L E:

                noun: exaggerated statements or claims not meant to be taken literally.

                But yeah, sure. I see you, someone who jumps to the defence of a man who wouldn't piss on you if you were on fire, as someone who still believes that startups make you rich.

                Wang doesn't need you to protect him, he's got people to do that for him.

          • NewsaHackO 15 hours ago

            I could not imagine being as salty as the original poster seems to be about Alex Wang. Holding that amount of hate for a superior who is more successful than you can't be good for the soul

            • BobbyJo 7 hours ago

              a superior is kind of a loaded way to say "executive" or "company's leadership".

            • lijok 14 hours ago

              You’re taking this a tad too seriously

    • williadc 20 hours ago

      Alexandr Wang is 28 years old, the same age as Mark Zuckerberg was when Facebook IPO'ed.

      • smrtinsert 19 hours ago

        A business where the distinguishing factor was exclusivity not technical excellence so it tracks.

  • ngcazz 6 hours ago

    Wouldn't it be relatively cheap to use Claude as a self-organizing control backplane for invoking the MCP tools that would actually do the work?

  • 0xWTF 11 hours ago

    > "world leading" AI lab (now run by a teenage data labeller)

    Aarush Sah?

  • cadamsdotcom 14 hours ago

    I think the high order bit here is you were working with models from previous generations.

    In other words, since the latest generation of models have greater capabilities the story might be very different today.

    • Tiberium 13 hours ago

      Not sure why you're being downvoted; your observation is correct. Newer models are indeed a lot better, and even at the time, that foundation model (even if fine-tuned) might have been worse than a commercial model from OpenAI/Anthropic.

  • throwaway2037 13 hours ago

        > now run by a teenage data labeller
    
    Do you mean Alexandr Wang? Wiki says he is 28 years old. I don't understand.
gpi a day ago

The amendment below from the Anthropic blog post is telling.

Edited November 14 2025:

Added an additional hyperlink to the full report in the initial section

Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"

  • wging 19 hours ago

    > The operational tempo achieved proves the use of an autonomous model rather than interactive assistance. Peak activity included thousands of requests, representing sustained request rates of multiple operations per second.

    The assumption that no human could ever (program a computer to) do multiple things per second, nor have their code do different things depending on the result of the previous request is... interesting.

    (observation is not original to me, it was someone on Twitter who pointed it out)
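
    For reference, a short human-written script, with no model in the loop, hits that tempo trivially. A minimal sketch in Python (aiohttp, placeholder URL assumed):

        import asyncio
        import aiohttp

        TARGET = "https://example.invalid/api"  # placeholder, not a real endpoint

        async def probe(session, i):
            # each request can branch on the previous response, like any hand-rolled scanner
            async with session.get(TARGET, params={"step": i}) as resp:
                return resp.status

        async def main():
            async with aiohttp.ClientSession() as session:
                # fire a batch concurrently; loop this for "thousands of requests"
                print(await asyncio.gather(*(probe(session, i) for i in range(20))))

        asyncio.run(main())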

    • sublimefire 17 hours ago

      Great point; it might just be pure ignorance. Even OSS pentesting tooling such as Metasploit has great capabilities. I can see how an LLM could be leveraged to build custom modules on top of those tools, or how you could add basic LLM “decision” making, but this is just another additive tool in the chain.

  • AstroBen 21 hours ago

    There is absolutely no way a technical person would mix those up

    • edanm 16 hours ago

      Right! It's well known that technical people never make mistakes.

      • SiempreViernes 16 hours ago

        I think the expectation is more that serious people have their work checked over by other serious people to catch the obvious mistakes.

        • ChadNauseam 11 hours ago

          Every time you have your work "checked over by other serious people", it eliminates 90% of the mistakes. So you have it checked over twice so that 99% of mistakes have been eliminated, and so on. But it never gets to 0% mistakes. That's my experience anyway.
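
          Spelled out (assuming each review independently catches 90% of whatever is left):

              remaining = 1.0
              for checks in range(4):
                  print(f"after {checks} reviews: {remaining:.1%} of mistakes remain")
                  remaining *= 0.10  # each pass removes ~90% of what survived the last one
              # 100% -> 10% -> 1% -> 0.1% ... approaching zero, never reaching it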

        • szszrk 4 hours ago

          Serious people like to look at things through a magnifying glass. Which makes them miss a lot.

          I've seen printed books checked by paid professionals that contained a "replace all" applied without context, creating a grammar error on every single page. Or ones where everyone just forgot to add page numbers. Or a large cookbook where the index and page numbers didn't match, making it almost impossible to navigate.

          I'm talking about pre-AI work, with a publisher. Apparently it wasn't obvious to them.

    • wonnage 14 hours ago

      But what about an ML person roped into writing an AI assisted blogpost about security

dev_l1x_be a day ago

People grossly underestimate APTs. They are more common than the average IT-curious person thinks. I happened to be on call when one of these guys hacked into Gmail from our infra. It took principal security engineers a few days before they could clearly understand what happened: multiple zero-days, stolen credit cards, and finally a massive social campaign to get one of the Google admins to click on a funny cat video. The investigation revealed which state actor was involved because they didn't bother to mask what exactly they were looking for. AI just accelerates the effectiveness of such attacks and lowers the bar a bit. Maybe quite a bit?

  • f311a a day ago

    A lot of people behind APTs are low-skilled and make silly mistakes. I worked for a company that investigates traces of APTs, they make very silly mistakes all the time. For example, oftentimes (there are tens of cases) they want to download stuff from their servers, and they do it by setting up an HTTP server that serves the root folder of a user without any password protection. Their files end up indexed by crawlers since they run such servers on default ports. That includes logs such as bash history, tool logs, private keys, and so on.
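
    As a minimal sketch of that misconfiguration (paths illustrative; this is the mistake being described, not a how-to), it is the equivalent of serving a home directory on a default web port with no auth, which crawlers then index:

        from functools import partial
        from http.server import HTTPServer, SimpleHTTPRequestHandler

        # serves the operator's entire home directory: bash history, keys, tool logs, all of it
        handler = partial(SimpleHTTPRequestHandler, directory="/root")
        HTTPServer(("0.0.0.0", 80), handler).serve_forever()  # no password, default port, crawlable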

    They win because of quantity, not quality.

    But still, I don't trust Anthropic's report.

    • marcusb a day ago

      The security world overemphasizes (fetishizes, even) the "advanced" part, because zero-days and security tools to compensate against zero-days are cool and fun, and underemphasizes the "persistent" part, because that's boring, hard work and no fun.

      And, unless you are Rob Joyce, talking about the persistent part doesn't get you on the main stage at a security conference (e.g., https://m.youtube.com/watch?v=bDJb8WOJYdA)

  • AdamN an hour ago

    Not just effectiveness, but speed.

  • lxgr a day ago

    Important callout. It starts with comforting voices in the background keeping you up to date about the latest hardware and software releases, but before you know it, you've subscribed to yet another tech podcast.

  • sidewndr46 a day ago

    You're telling me you were targeted by Multiple Zero Days in 1 single attack?

    • ikiris 20 hours ago

      That's generally how actual APT attacks go, yes.

  • jmkni a day ago

    Do you mean APT (Advanced persistent threat)?

    • names_are_hard a day ago

      It's confusing. Various vendors sell products they call ATPs [0] to defend yourself from APTs...

      [0] Advanced Threat Protection

      • jmkni a day ago

        relevant username :)

    • chasd00 15 hours ago

      i seriously thought APT meant advanced persistent teen

    • dev_l1x_be 19 hours ago

      Yes, sorry typo.

      • dang 17 hours ago

        I've taken the liberty of fixing it in your post. I hope that's ok!

jmkni a day ago

That whole article felt like "Claude is so good Chinese hackers are using it for espionage" marketing fluff tbh

  • ndiddy a day ago

    Reminds me of how when the Playstation 2 came out, Sony started planting articles about how it was so powerful that the Iraqi government was buying thousands of them to turn into a supercomputer (including unnamed military officials bringing up Sony marketing points). https://www.wnd.com/2000/12/7640/

    • y-curious a day ago

      Is there any compelling evidence that this was marketing done by Sony? Yes, the bit about government officials advertising the device doesn't pass the sniff test for me, but this Reddit thread[1] makes the whole story seem plausible. America and Japan really did impose restrictions on shipping to Iraq, and people did eventually chain PS3s together for cheap computing.

      1: https://www.reddit.com/r/AskHistorians/comments/l3hp2i/did_s...

      • Keyframe a day ago

        Apple used similar marketing tactics with the G4, claiming it was "so powerful" it fell under export-control restrictions, when in reality that was an outdated regulation that needed an update.

        • semi-extrinsic 2 hours ago

          Many hardware manufacturers do the same by claiming MIL-STD-810 compliance, which can mean almost anything unless further details are specified.

          E.g. you can choose to test against MIL-STD-810 method 500.6 procedure I, to show that the device tolerates low pressure well enough to be safely transported via air freight, which no consumer electronics product in existence is going to fail.

    • duxup 13 hours ago

      I remember when Sony's video game presentations couldn't help but include some marketing about how the PlayStation 2 processor would soon be everywhere: your TV, your refrigerator.

      At the time I was thinking, "Why would my fridge need a pricey processor?"

      Many years later I still don't need that.

    • bongodongobob 20 hours ago

      But it was that good for the price point. And you could run Linux on it. That was the Beowulf cluster era. Lots of universities were doing that.

      • duskwuff 14 hours ago

        You may be mixing up the PS2 and PS3. The PS3 found some marginal use in computing clusters; the PS2 did not.

        • bongodongobob 13 hours ago

          A quick google will show you that it was. I remember because I was in college at the time and that's how I learned what a Beowulf cluster was. Maybe PS3 was more successful or more popular, but there were definitely PS2 clusters.

  • mnky9800n a day ago

    I could also believe that they fell into the trap of being so good at making Claude that they now think they are good at everything, so why hire an infosec person when we can write our own report? And that’s why their report violates so many norms: they didn’t know them.

    • neves 16 hours ago

      They don't need to hire anyone. They just prompted Claude to write for them. :-)

  • neves 16 hours ago

    Leaning into the "China menace" will also earn you points with the US government.

    I can see that they can detect an attack using their tools, but tracing it to an organization "sponsored" by the Chinese government looks like bullshit marketing. How did they do it? A Google search? I hold the Chinese government in higher regard; they wouldn't be easily detected by a startup without experience in infosec.

  • skybrian 19 hours ago

    If we’re sharing vibes, “our product is dangerous” seems like an unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

    Meanwhile, another reason to make a press release is that you’ll be criticized for the coverup if you don’t. Also, it puts other companies on notice that maybe they should look for this?

    • ChadNauseam 11 hours ago

      Yeah. You'd think nuclear power would be incredibly popular, given that "our product is dangerous" is apparently a genius marketing strategy. After all, if it can make a whole region of Ukraine uninhabitable and be weaponized to turn people into shadows on pavement, it can surely power your fridge. Yet oddly, companies making nuclear reactors always market them as being very safe instead of leaning into the danger.

    • scrps 15 hours ago

      I think it might be a "our product IS dangerous but look we are on top of it!" kind of deal. Still leaves a funny taste either way.

    • Barrin92 14 hours ago

      >unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

      Given the valuations and the money these companies burn through, marketing-wise they basically need to play by the same logic as defense companies. They're all running on "we're reinventing the world and building god" to justify their spending; "here's a chatbot (like 20 other ones) that's going to make you marginally more productive" isn't really going to cut it at this point. They're in too deep.

    • mrtesthah 15 hours ago

      The bulk of OpenAI and Anthropic’s statements about doomsday AGI and AI safety in general also present the company as the sole ethical gatekeeper of the technology, whom we must trust and protect lest its unscrupulous rivals win the AI race. So this article is very much in line with that marketing strategy.

jnwatson 20 hours ago

There's a big gap of knowledge between infosec researchers and ML security researchers. Anthropic has a bunch of column B but not enough column A.

This was discussed in some detail in the recently published Attacker Moves Second paper*. ML researchers like using Attack Success Rate (ASR) as a metric for model resistance to attack, while for infosec, any successful attack (ASR > 0) is considered significant. ML researchers generally use a static set of tests, while infosec researchers assume an adaptive, resourceful attacker.

https://arxiv.org/abs/2510.09023

  • sim7c00 20 hours ago

    ML researchers are not sec researchers; they need to stick to their own game. Companies need to use both camps for a good holistic view of the problem. ML is the blue team, sec researchers the red.

    • saagarjha 16 hours ago

      Plenty of security researchers are blue team.

prinny_ a day ago

The lack of evidence before attributing the attack(s) to a Chinese-sponsored group makes me connect this report with recent statements from companies in the AI space about how China is about to surpass the US in the AI race. Ultimately, statements and reports like these seem more like an attempt to make the US government step in and be the big investor that keeps the money flowing than anything else.

  • JKCalhoun a day ago

    Do public reports like this one often go deep enough into the weeds to name names, list specific tools and techniques, URLs?

    I don't doubt of course that reports intended for government agencies or security experts would have those details, but I am not surprised that a "blog post" like this one is lacking details.

    I just don't see how one goes from "this is lacking public evidence" to "this is likely a political stunt".

    I guess I would also ask the skeptics (a bit tangentially, I admit): do you think what Anthropic suggested happened is in fact possible with AI tools? I mean, are you denying that this could even happen, or just that Anthropic's specific account was fabricated or embellished?

    Because if the whole scenario is plausible that should be enough to set off alarm bells somewhere.

    • snowwrestler 21 hours ago

      There’s a big jump between “the attack came from China” and “the attack was sponsored by the Chinese government.” People generally make this jump in one of three ways.

      1) Just a general assumption that all bad stuff from China must be state-sponsored because it’s generally a top-down govt-controlled society. This is not accurate and not really actionable for anyone in the U.S.

      2) The attack produced evidence that aligns with signatures from “groups” that are already widely known / believed to be Chinese state sponsored, AKA APTs. In this case, disclosing the new evidence is fine since you’re comparing to, and hopefully adding to, signature data that is already public. It’s considered good manners to contribute to the public knowledge from which you benefited.

      3) Actual intelligence work by government agencies like FBI, NSA, CIA, DIA, MI6, etc. is able to trace the connections within Chinese government channels. Obviously this is usually reserved for government statements of attribution and rarely shared with commercial companies.

      Hopefully Anthropic is not using #1, and it’s unlikely they are benefiting from #3. So why not share details a la #2?

      Of course it’s possible and plausible for people to be using Claude for attacks. But what good does saying that do? As the article says: defenders need actionable, technical attack information, not just a general sense of threat.

      • thinkingemote 21 hours ago

        Re #3: much intelligence work is done for the benefit of industry and commercial companies. To a country, its economy is the country. After the end of the Cold War, most state espionage focused on industry. Sharing is possibly common, but secret. The lack of details in the report smells to me of "we are not allowed to share the details". (It also smells of the adage about attributing to incompetence rather than lies.)

        Now, Anthropic is new and I don't know how embedded they are with their host government compared to a FAANG etc., but I wouldn't discount some of #3.

        (If you see an American AI company requiring security clearance that gives a good indication of some level of state involvement. But it might also be just selling their software to a peaceful internal department...)

      • gishh 21 hours ago

        [flagged]

        • tehjoker 20 hours ago

          this has to be satire

    • woooooo a day ago

      There's an incentive to blame "Chinese/Russian state sponsored actors" because it makes them less culpable than "we got owned by a rando".

      It's like the inverse of "nobody got fired for using IBM" -- "nobody can blame you for getting hacked by superspies". So, in the absence of any evidence, it's entirely possible they have no idea who did it and are reaching for the most convenient label.

      • JKCalhoun a day ago

        That's fair. If the actor (and it's a Chinese state actor here) is what is being questioned as "bullshit" then that should be the discourse in the article and in this thread.

        Instead, the lack of a paper trail from Anthropic seems to have people questioning the whole event?

        • dangus 20 hours ago

          State sponsorship can include the state looking the other way.

          • spopejoy 15 hours ago

            Not really? APTs would seem to be either criminal enterprises or state-sponsored because SOMEBODY has to be paying the big bucks.

            So yes, probably 100% of criminal enterprises are paying off officials, but if that's the definition of "state sponsored" then the term loses any meaning.

            EDIT I guess there's also "legit" businesses like Palantir/NSO group, but I would argue any firm like that is effectively state-sponsored as they are usually revolving doors with NSA-type agencies, the military etc.

          • brookst 20 hours ago

            So all attacks anywhere are state sponsored?

            • oarsinsync 19 hours ago

              > > State sponsorship can include the state looking the other way.

              > So all attacks anywhere are state sponsored?

              There's a difference between a deliberate decision to look away, and unawareness through lack of oversight.

              You steal candy from a store. There's a difference between the security guard seeing you and deliberately looking away, compared to just not seeing you at all.

        • hnthrowaway747 a day ago

          Exactly, and anyone can do so without even needing much evidence.

          It’s allowed these days to criticize someone for not providing evidence, even when that evidence would make it easier for the attackers to tune their attack to avoid being identified, and everyone will be like “Yeah, I’m mad, too! Anthropic sucks!” In the process, that only creates friction for the only company that’s spent significant ongoing effort to prevent AI disasters by trying to be the responsible leader.

          I’ve really had my fill of the current climate where people are quick to criticize an easy target just because they can rally anger. Anyone can rally anger. If you must rally anger, it should be against something like hypocrisy, not because you just get mad at things that everyone else hates.

      • jsnell 19 hours ago

        > There's an incentive to blame "Chinese/Russian state sponsored actors" because it makes them less culpable than "we got owned by a rando".

        But they didn't get hacked by anyone. I don't see how that applies.

    • rfoo a day ago

      > Do public reports like this one often go deep enough into the weeds to name names

      Yes. They often include IoCs, or at the very least, the rationale behind the attribution, like "sharing infrastructure with [name of a known APT effort here]".

      For example, here is a proper decade-old report from the most unpopular country right now: https://media.kasperskycontenthub.com/wp-content/uploads/sit...

      It established solid technical links between the campaign they were tracking and earlier, already attributed campaigns.

      So even "our enemy" got this right ten years ago; there really is no excuse for this slop.

    • zaphirplane a day ago

      Not vested in the argument, but it stood out to me that your argument is like a TV courtroom's: "it's plausible the report is true." That is very far from "the report is credible."

      • JKCalhoun a day ago

        You're right; lacking information, I am instead coming across as willing to give Anthropic the benefit of the doubt here.

        But I'm also often a Devil's Advocate and the tide in this thread (well, the very headline as well) seemed to be condemning Anthropic.

        • dangus 20 hours ago

          Honest companies with good reputations tend to get the benefit of the doubt.

          E.g., how much do you expect Costco or Valve to intentionally harm their customers compared to Comcast or Electronic Arts? That’s just the old school concept of reputation at work. Companies can “buy” benefit of the doubt by being genuine and avoiding blowing smoke up people’s ass.

          Anthropic has been spitting bullshit about how the AGI they’re working on is so smart it’s dangerous. So those chumps having no answers when they get hacked smells like something.

          Are they telling us their magical human AGI brain and their security professionals being paid top industry rates can’t trace what happened in a breach?

    • freehorse 19 hours ago

      > Do public reports like this one often go deep enough into the weeds to name names, list specific tools and techniques, URLs?

      This is literally answered in the second subsection of the linked article ("where are the IoCs, Mr.Claude ?").

    • rdiddly 19 hours ago

      The complaint is that there's no actionable information whatsoever. Alarm bells are just noise.

  • metacritic12 18 hours ago

    Anthropic has also been the most anti-China of the LLM companies for a long while, so it's possible they're using an opportunistic hack (potentially involving actual Chinese IP addresses) as another way to push their agenda.

    • hopelite 15 hours ago

      Considering that ever since the Vault 7 releases we should be well aware that at least one government can make any attack look like the work of any other nation-state actor, any attribution, especially to convenient adversaries, is extremely suspicious on its face.

    • pbrum 18 hours ago

      This is key

  • scuff3d 18 hours ago

    The bubble is gonna burst soon and these companies are desperate to convince the government they are either too big to fail or too critical to national defense to fail.

    • bdangubic 18 hours ago

      Feels like most current humans will die (some of boredom) while waiting on this bubble to burst… US in general and HN in particular are averaging 10.78 bubble-popping predictions per hour :)

      • scuff3d 18 hours ago

        It was the same thing with the dotcom bubble. People were talking about it 3 or 4 years before it actually happened.

  • sschueller 20 hours ago

    They yell "China is stealing our tech!" but want us to look away when they pirate everything ever created for their model training...

    • pgalvin 20 hours ago

      Anthropic does seem to have more ethical practices on that than most companies in this space, purchasing and scanning physical books rather than pirating them as Meta and OpenAI did. However, books are cheap, and I’m unsure of their wider practices.

      https://arstechnica.com/ai/2025/06/anthropic-destroyed-milli...

      • bn-l 18 hours ago

        They pirated wholesale as well. Hence the billion dollar settlement.

  • dcotorgoggle a day ago

    [flagged]

    • lazide a day ago

      ‘No true Scotsman’?

      Also, plenty of folks with no allegiance would love to pit everyone else against each other.

      • hnthrowaway8347 a day ago

        Possibly, but:

        - Many people in many countries now hate the U.S. and U.S. companies like Anthropic.

        - In addition, leaders in the U.S. have been lobbied by OpenAI and invest in it which is a direct competitor and is well-represented on HN.

        - China’s government has vested interest in its own companies’ AI ventures.

        Given this, I’d hardly say that Anthropic is much of a U.S. puppet company, and it likely has strong evidence about what happened, while also hoping to spin the PR to get people to buy their services.

        I don’t think it’s unreasonable to assume that people that write inflammatory posts about Anthropic may have more than an axe to grind against AI and may be influenced by their country and its propaganda or potentially may even be working for them.

        • bgwalter a day ago

          You are a communist if you do not like "AI" or sloppy "papers"!

trollbridge 18 hours ago

Did anyone else find that Anthropic's report felt a bit like an ad? "Look at how powerful our stuff is; if the bad guys get it, they can do really bad things!"

Sort of like firearm ads that show scary bad guys with scary looking weapons.

  • smcl 4 hours ago

    That’s a common thing from these AI companies - hyping themselves up by claiming to be terrified of their capabilities

  • fsiefken 11 hours ago

    That was the point of the blog post by djnnvx; what is meant or added by the comment?

    • bean469 4 hours ago

      > what is meant or added with the comment

      They just want to discuss their interpretation of the blog post. I don't think that there's anything wrong with that

notpublic a day ago

"A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."

Not sure if the author has tried any other AI assistants for coding. People who haven't tried coding AI assistants underestimate their capabilities (though unfortunately, those who use them overestimate what they can do too). Having used Claude for some time, I find the report's assertions quite plausible.

  • stingraycharles a day ago

    Yup. One recent thing I started using it for is debugging network issues (or whatever) inside actual servers. Just give it permission to SSH into the box and investigate for itself.

    Super useful to see it isolate the problem using tcpdump, investigating route tables, etc.

    There are lots of use cases that this is useful for, but you need to know its limits and perhaps even more importantly, be able to jump in when you see it’s going down the wrong path.
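
    A minimal sketch of the "SSH access, but with limits" idea: rather than a raw shell, the agent only gets a wrapper that runs an allowlist of read-only diagnostics (the hostname and command list here are assumptions, not how any particular tool actually does it):

        import subprocess

        # read-only diagnostics the agent may run; anything else is refused
        ALLOWED_PREFIXES = ("ip route", "ip addr", "ss -tulpn", "tcpdump -c 50 -nn")

        def run_diagnostic(host: str, command: str, timeout: int = 60) -> str:
            if not command.startswith(ALLOWED_PREFIXES):
                raise ValueError(f"command not allowed: {command}")
            result = subprocess.run(["ssh", host, command],
                                    capture_output=True, text=True, timeout=timeout)
            return result.stdout or result.stderr

        # e.g. print(run_diagnostic("db-host-1", "ip route"))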

  • Aurornis a day ago

    > Personally, I don’t use it but that is besides the point.

    This popped out to me, too. This pattern shows up a lot on HN where commenters proudly declare that they don’t use something but then write as if they know it better than anyone else.

    The pattern is common in AI threads where someone proudly declares that they don’t use any of the tools but then wants to position themselves as an expert on the tools, like this article. It happens in every thread about Apple products where people proudly declare they haven’t used Apple products in years but then try to write about how bad it is to use modern Apple products, despite having just told us they aren’t familiar with them.

    I think these takes are catnip to contrarians, but I always find it unconvincing when someone tells me they’re not familiar with a topic but then also wants me to believe they have unique insights into that same topic they just told us they aren’t familiar with.

    • filleduchaos 21 hours ago

      Whether the author uses any AI tools or not (to talk of using Claude specifically) is quite literally completely beside the point, which is readily apparent from actually reading the article versus going into it with your hackles raised ready to "defend AI".

    • bsamuels 21 hours ago

      welcome, you're well along the path of realizing that most of the people on this site don't know what they're talking about

    • stOneskull 18 hours ago

      > that is besides the point.

      i guess it's on both sides of the point.

  • delusional a day ago

    The article doesn't talk about the implausibility of the tool doing the stated task. It talks about the report, and how it doesn't have any details to make us believe the tool did the task. Maybe the thing they are describing could happen. That doesn't mean we have any evidence that it did.

    • notpublic a day ago

      If you know what to look for, the report actually has quite a few details on how they did it. In fact, when the report came out, all it did was confirm my suspicions.

      • qzzi 15 hours ago

        I've been hacking professionally for 30 years and I know what to look for. Anthropic's report is garbage. Period.

      • hrimfaxi a day ago

        > If you know what to look for

        Mind sharing?

  • thoroughburro a day ago

    The author’s arguments explicitly don’t dispute plausibility. The article accurately states that mere plausibility is a misleading basis for this report, that the report provides nothing but plausibility, and that it is thus of low quality and dubious motivation.

    Anthropic’s lack of any evidence for their claims doesn’t require any position on AI agent capability at all.

    Think better.

    • notpublic a day ago

      What is the proper way to disclose evidence for this class of hacking?

      • cosmosgenius a day ago

        Starting with an isolated PoC showing the vector being exploited would help. I like Google Project Zero mainly for this.

  • phyzome a day ago

    And yet it's still besides the point.

    • readthenotes1 a day ago

      Well, beside the point. A quaint error to throw in

      • phyzome 16 hours ago

        Hah, I wonder if that was my own error or if I was just echoing the quote's spelling.

  • readthenotes1 a day ago

    They should also get a different AI to write the lede, as it is pretty empty once we get past the "besides (sick) the point"

    • swores a day ago

      You most likely know and just suffered autocorrect, but given the context of using it to point out a similar mistake I feel the need to correct you: it should be “sic”, not “sick”.

      (For anyone not familiar: https://en.wikipedia.org/wiki/Sic)

      • itintheory 21 hours ago

        I assume that was the joke. Also, the use of parentheses makes it stand out from the normal bracketed use as an attempt at humor.

        • swores 21 hours ago

          If it was a joke it went right over my head

kace91 a day ago

Does Anthropic currently have cybersec people able to provide a standard assessment of the kind the community expects?

This could be a corporate move as some people claim, but I wonder if the cause is simply that their talents are currently somewhere else and they don’t have the company structure in place to deliver properly in this matter.

(If that is the case they are not then free of blame, it’s just a different conversation)

  • CuriouslyC a day ago

    I throw Anthropic under the bus a lot for their lack of engineering acumen. If they don't have a core competency like engineering fully covered, I'd say there's a near 0% chance they have something like security covered.

    • fredoliveira a day ago

      What makes you think they lack engineering acumen?

      • CuriouslyC a day ago

        The hot mess that is Claude Code (if you multi-orchestrate with it, it'll start to grind even very powerful systems to a halt, 15+ seconds of unresponsiveness, all because CC constantly serializes/deserializes a JSON data file that grows quite large every time you do stuff), their horrible service uptime compared to all their competitors, the month-long performance degradation that users had to scream about to get them to investigate, the fact that they had to outsource their web client and it's still bad, etc.
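
        Back-of-the-envelope for why that JSON pattern hurts: if the whole session file is re-serialized on every tool call and grows by a fixed chunk each time, total serialization work is quadratic in the number of calls. Illustrative numbers only, not measured from Claude Code:

            import json

            state = {"events": []}
            total = 0
            for call in range(1000):
                state["events"].append({"call": call, "payload": "x" * 2000})  # ~2 KB per call
                total += len(json.dumps(state))  # whole history rewritten every time
            print(f"~{total / 1e9:.1f} GB serialized over 1000 calls")  # grows roughly O(n^2)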

        • saagarjha 15 hours ago

          You think Anthropic’s engineering talent for infosec is possible to determine because…you’ve used Claude Code? Am I understanding this right?

        • weird-eye-issue 20 hours ago

          > The hot mess that is Claude Code

          And yet it's one of the fastest growing products of all time and is currently the state of the art for AI coding assistants. Yeah it's not perfect but nothing is

          • CuriouslyC 18 hours ago

            I give the model a lot of credit for being very good at a fairly narrow slice of work (basic vibe coding/office stuff) that also happens to be extremely common. I'm harder on Claude Code because of its success and the fact that the company that makes it is worth so much.

          • thunderfork 20 hours ago

            "I doubt they have good security chops because they make bad technical choices"

            "What bad technical choices?"

            "These ones"

            "Ok but they're fast-growing, so..."

            Does being a fast-growing product mean you have security chops or is this a total non-sequitur?

            • weird-eye-issue 12 hours ago

              They brought up a performance-related edge case that I've never run into, even with extremely heavy usage, including building my own agent that wraps around CC and runs several sessions in parallel... so yeah, I fail to see the relevance.

        • fifhtbtbf 20 hours ago

          I have the opposite perception: they’re the only company in the space that seems to have a clue what responsible software engineering is.

          Gemini Code and Cursor both did such a poor job sandboxing their agents that the exploits sound like punchlines, while Microsoft doesn’t even try with Copilot Agentic.

          Countless Cursor bugs have been fixed with obviously vibe-coded fake solutions (you can see if you poke into code embedded in their binaries) which don’t address the problems on a fundamental level at all and suggest no human thinking was involved.

          Claude has had some vulnerabilities, but many fewer, and they’re the only company that even seemed to treat security like a serious concern, and are now publishing useful related open source projects. (Not that your specific complaint isn’t valid, that’s been a pain point for me too, but in terms of the overall picture that’s small potatoes.)

          I’m personally pretty meh on their models, but it’s wild to me to hear these claims about their software when all of the alternatives have been so unsafe that I’d ban them from any systems I was in charge of.

          • CuriouslyC 18 hours ago

            I suggest spending some time with Codex. Claude likes to hack objectives, it's really messy and it'll run off sometimes without a clear idea of what you want or how a project works. That is all fine when you're a non-technical person vibe coding a demo, but it really kills the product when you're working on hard tasks in a large codebase.

            • fifhtbtbf 17 hours ago

              Codex is the one I haven’t really tried, I’ll have to check it out.

          • saagarjha 15 hours ago

            Every tool in this space is blatantly unsafe. The sandboxes that people have designed are quite ineffective.

        • ohyoutravel a day ago

          [flagged]

          • CuriouslyC a day ago

            You seem to have a personal emotional investment in Anthropic, what's the deal?

            • ohyoutravel a day ago

              [flagged]

              • CuriouslyC a day ago

                You're coming in so very hot, you should take a second look at your response. If you think calling out public well documented failings and things I've wasted time debugging and work around during my own use of the product is arrogance and narcissism, you've got some very warped priors.

                If you think I'm arrogant in general because you've been stalking my comment history, that's another matter, but at least own it.

                • ohyoutravel a day ago

                  Just based on your two comments above. You should paste this convo into an LLM of your choice and I bet it would explain to you what I mean. ;)

  • ndiddy a day ago

    If they don't have cybersec people able to adequately investigate and write up whatever they're seeing, and are simply playing things by ear, it's extremely irresponsible of them to publish claims like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI." without any evidence to back them up.

  • matthewdgreen a day ago

    They have an entire model trained on plenty of these reports, don’t they?

miohtama 11 hours ago

Anthropic portrays itself as an AI safety company. Its valuation and funding rounds depend on this. Publishing AI safety material, even if bullshit, is then what they do, to downplay competitors and the Chinese and to push for regulatory capture.

EMM_386 a day ago

Anthropic is not a security vendor.

They're an AI research company that detected misuse of their own product. This is like "Microsoft detected people using Excel macros for malware delivery", not "Mandiant publishes APT28 threat intelligence". They aren't trying to help SOCs detect this specific campaign; it's a warning to an entire industry about a new attack modality.

What would the IoCs even be? "Malicious Claude Code API keys"?

The intended audience is more like - AI safety researchers, policy makers, other AI companies, the broader security community understanding capability shifts, etc.

It seems the author pattern-matched "threat intelligence report" and was bothered that it didn't fit their narrow template.

  • 63stack a day ago

    If Anthropic is not a security vendor, then they should not make statements like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group" or "represents a fundamental shift in how advanced threat actors use AI" and should let the security vendors do that.

    If the report can be summed up as "they detected misuse of their own product" as you say, then that's closer to a nothingburger, than to the big words they are throwing around.

    • zaphar a day ago

      That makes no sense. Just because they aren't a security vendor doesn't mean they don't have useful information to share. Nor does it mean they shouldn't share it. They aren't pretending to be a security researcher, vendor, or anything other than AI researchers. They reported findings on how their product is getting used.

      Anyone acting like they are trying to be anything else is saying more about themselves than they are about Anthropic.

  • MattPalmer1086 a day ago

    Yep, agree with your assessment. As someone working in security I found the report useful as a warning of the new types of attack we will likely face.

  • padolsey a day ago

    > What would the IoCs even be?

    Prompts.

    • EMM_386 a day ago

      The prompts aren't the key to the attack, though. They were able to get around guardrails with task decomposition.

      There is no way for the AI system to verify whether you are white hat or black hat when you are doing pen-testing if the only task is to pen-test. Since this is not part of a "broader attack" (in the context), there is no "threat".

      I don't see how this can be avoided, given that there are legitimate uses for every step of this in creating defenses against novel attacks.

      Yes, all of this can be done with code and humans as well - but it is the scale and the speed that becomes problematic. It can adjust in real-time to individual targets and does not need as much human intervention / tailoring.

      Is this obvious? Yes - but it seems they are trying to raise awareness of an actual use of this in the wild and get people discussing it.

      • padolsey a day ago

        I agree that there will be no single call or inference that presents malice. But I feel like they could still share general patterns of orchestration (latencies, concurrencies, general cadences and parallelization of attacks, prompts used to granularize work, whether prompts themselves were generated in previous calls to Claude). There are a bunch of more specific telltales they could have alluded to. I think it's likely they're being obscure because they don't want to empower bad actors, but that's not really how the cybersecurity industry likes to operate. Maybe Anthropic believes this entire AI thing is a brand-new security regime and so believes existing resiliences are moot. That we should all follow blindly as they lead the fight. Their narrative is confusing. Are they actually being transparent, or just transparency-"coded"?

    • andrewflnr 8 hours ago

      IoCs are generally things that the victim/defender of the attack sees. Defenders don't see prompts.

kopirgan a day ago

AI company doing hype and not giving enough details?

Nah that can't be possible it's so uncharacteristic..

ifh-hn a day ago

This article does seem to raise some serious issues with the Anthropic report. I wonder if Anthropic will release proof of what they claim, or whether the report was a marketing/scare-tactic push to have AI used by defenders, like the article suggests it is.

padolsey a day ago

> PoC || GTFO

I agree so much with this. And am so sick of AI labs, who genuinely do have access to some really great engineers, putting stuff out that just doesn't pass the smell test. GPT-5's system card was pathetic. Big talk of Microsoft doing red-teaming in ill-specified ways, entirely unreproducible. All the labs are "pro-research" but they again and again release whitepapers and pump headlines without producing the code and data alongside their claims. This just feeds into the shill-cycle of journalists doing 'research' and finding 'shocking thing AI told me today' and somehow being immune to the normal expectations of burden of proof.

  • stogot a day ago

    Microsoft’s quantum lab also made ridiculous claims this year, with no updates or retractions after they were mocked by the community and some even claimed fraud

    https://www.theregister.com/2025/03/12/microsoft_majorana_qu...

    https://www.windowscentral.com/microsoft/microsoft-dismisses...

    • 52-6F-62 a day ago

      Tech companies simply don’t feel it is fraud. They feel it is “marketing fiction”

      • hugh-avherald a day ago

        "I had Elizabeth Holmes explain to me three times what she got arrested for because it sounds an awful lot like what I do here every day."

    • yahoozoo 12 hours ago

      Since these were earlier this year in March, they’re just memory holing it?

  • mlinhares a day ago

    They're gonna say that if they explain how it was done bad people will find out how to use their models for more evil deeds. The perfect excuse.

    • stogot a day ago

      They can still provide indicators of compromise

      • ACCount37 21 hours ago

        What ARE the indicators of compromise?

        It's not a piece of malware or an exploit. It's an AI hacker. It does the same things a human hacker would but faster.

coldtea 3 hours ago

If you want to justify asking for post-AI-crash trillion-dollar bailouts, what better grounds than "national interest"?

babyshake 21 hours ago

One aspect the report is very vague about is the nature of the monitoring Anthropic is doing on Claude Code. If they can detect attacks they can surely detect other things of interest (or value) to them. Is there any more information about this?

fugalfervor a day ago

This site is hostile to VPNs, so I cannot read this unfortunately.

  • xobs a day ago

    I’m not even on a vpn and I’m getting an error saying the website is blocked.

    • blep-arsh a day ago

      One can't be a real infosec influencer unless one blocks every IP range of every hostile nation-state looking to steal valuable research and fill the website with malware

      • lxgr a day ago

        Arguably a skill issue. Which VPN worth its salt doesn't have a Sealand egress node?

  • nicolaslem a day ago

    I got a Cloudflare captcha to access a few kb of plain text. Chances are, the captcha itself is heavier than the content behind it. What is the point?

    • layer8 a day ago

      The point is to have Cloudflare serve the few KB of cached content instead of the original server.

      • magackame a day ago

        You can have just caching without bot protection

skybrian 20 hours ago

> You cannot just claim things and not back it up in any way

They must be new to the Internet :)

More seriously, I would certainly like to see better evidence, but I also doubt that Anthropic is making it up. The evidence for that seems to be mostly vibes.

If we don’t trust the report and discard it as gossip, then I guess we just wait and see what the future brings?

htrp a day ago

Launching Soon:

Claude for Cybersecurity - Automated Defence in Depth Hacker Protection

  • ares623 18 hours ago

    Yay even more useless findings the understaffed security team needs to toil on. Because no one actually wants to be accountable in the space.

thefounder 19 hours ago

I've seen attributions to state actors so many times... let's not get into this. I think most companies try to play this card to save themselves from the embarrassment of being pwned by some script kiddies.

elesbao a day ago

Anthropic's report misses some fundamental information: was the attack started by an insider? An outsider? Can I use my Claude to feed it these prompts and hack the world without even knowing how to get other companies' source code or data? That's the main PR BS: attribute it to a Chinese group, don't explain how they got there, whether they had to authenticate to Anthropic's platform after infiltrating the victims' network, and if so, where's the log. If not, it means they used Claude Code for free, which is another red flag.

  • ACCount37 21 hours ago

    That's IN the report. Yes, yes you can. You don't need to be an insider at Anthropic to use Anthropic's AIs.

    They used a custom Claude Code rig as an "automated hacker" - pointing it at the victims, either through a known entry point or just at the exposed systems, and having it poke around for vulns.

    They must have used either API keys or some "pro" subscription accounts for that - neither is hard to get for a cybercriminal. If you have access to Claude Code and can prompt engineer the AI into thinking you are doing legitimate security work, you can do the same thing they did.

    How do you attribute an attack like this? You play the guessing game. You check who the targets were, what the attackers tried to accomplish, and what the usage patterns were. There are only so many hacker groups that are active during Chinese working hours and are primarily interested in targeting government systems of Taiwan.
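
    Roughly, a toy version of that work-hours signal (my own illustration, not anything from Anthropic's report) is to check whether activity clusters inside 09:00-18:00, Monday to Friday, for a given UTC offset:

        # Toy "work-hours" attribution heuristic - purely illustrative.
        from datetime import datetime, timezone, timedelta

        def share_in_work_hours(epoch_seconds, utc_offset_hours=8):
            tz = timezone(timedelta(hours=utc_offset_hours))
            hits = 0
            for ts in epoch_seconds:
                local = datetime.fromtimestamp(ts, tz)
                if local.weekday() < 5 and 9 <= local.hour < 18:  # Mon-Fri, 9-18
                    hits += 1
            return hits / len(epoch_seconds)

        # A value near 1.0 for UTC+8 (and low for other offsets) is one weak
        # signal among many - targets and tooling matter at least as much.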

chaos_zhang 6 hours ago

The founders of Anthropic previously worked at Baidu, a Chinese tech company. I hope their perspective on China is based on rational analysis rather than personal grievances. Unfortunately, judging from this paper, I am inclined to believe it is the latter.

broknbottle 17 hours ago

Hmm, seems their play is to encourage security teams to experiment with AI, e.g. Claude. Google's play seems to be to spend $30 billion+ on Wiz and sell both the poison (AI) and the cure (Wiz security services). Interesting business models; reminds me of when CVS would sell cigarettes.

spacecadet 2 hours ago

Yaaaawn. If you know you know. This is script kiddy child's play with LLMs relative to security... My team is winning CTFs with fully local/distributed/private LLMs and automated agents. We deploy advanced AI honeypots and "chaos agents" using game theory orchestration and other cutting edge research. Anthropic isn't even on the radar relative to this. Microsoft/OpenAI are light years ahead given their proximity to gov/MIL... Adversarial machine learning is a fascinating area of study and practice, and relatively quiet when it comes to hype.

jimmydoe a day ago

Washington has been cold to Anthropic for the wrong bet they made in 2024, hence Anthropic has been desperately screaming all sorts of bullshit to get back attention.

Honestly, their political homelessness will likely continue for a very long time: pro-biz Democrats in NY are losing traction, and if Newsom wins 2028, they are still at a disadvantage against OpenAI, which promised to stay in California.

lmeyerov 21 hours ago

I can believe it, so a different question, as the attribution is unclear:

For context: a bunch of whitehat teams have been using agents to automate both red + blue team cat-and-mouse flows, and quite well, for a while now. The attack sounded like normal pre-AI methods orchestrated by AI, which is what many commercial red team services already do. Ex: Xbow is #1 on HackerOne bug bounties, meaning live attempts, and works how the article describes. Ex: we do louie.ai on the AI investigation agent side, 2+ years now, and are able to speed-run professional analyst competitions. The field is pretty busy & advanced.

So what I was more curious about is: how did they know it wasn't one of the many pentest attack-as-a-service offerings? Xbow is one of many, and their devs would presumably use VPNs. Did Anthropic confirm the attacks with the impacted parties? Were there behavioral tells pointing to a specific APT rather than the usual suspects? And are they characterizing white hat tester workloads so they can separate those out?

jonstewart a day ago

I was at an AI/cybersecurity conference recently and the talk given by someone from Anthropic was a lot like this report: tantalizing, vague, and disappointing. The speaker alluded to similar parts of this report. It was as though everything was reflected through Claude: simultaneously polished, impressive, and lost in the deep end.

Dumblydorr a day ago

What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even an LLM may not help much in defense, but it could teach attackers a lot, right? What if employees gave the LLM info during their use that attackers could then get re-fed and study?

  • ACCount37 a day ago

    AGI favors attackers initially. Because while it can be used defensively, to preemptively scan for vulns, harden exposed software for cheaper and monitor the networks for intrusion at all times, how many companies are going to start doing that fast enough to counter the cutting edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

    It's like a very very big fat stack of zero days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

    It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.

    • PunchyHamster a day ago

      Defending is much, much harder than attacking for humans, I'd extrapolate that to AI/AGIs.

      Defender needs to get everything right, attacker needs to get one thing right.

      • monocasa 18 hours ago

        Alternatively, one component of a superintelligence that makes it super might be a tiered mind that's capable of processing far more input streams simultaneously to get around the core human inadequacy here, that we can only really focus on one thing at a time.

        The same way we can build "muscle memory" to delegate simple autonomous tasks, a superintelligence might be able to dynamically delegate to human-level (or greater) sub-intelligences to vigilantly watch everything it needs to.

        • ACCount37 18 hours ago

          I automatically assume this to be the case, but I guess a lot of people don't. They imagine ASI as something like "an extremely smart human", not "an entire civilization worth of intelligence, attention and effort".

          One of the most intuitive pathways to ASI is that AGI eventually gets incredibly good at improving AGI. And a system like this would be able to craft and direct stripped-down AI subsystems.

      • ACCount37 a day ago

        But security advancements scale.

        On average, today's systems are much more secure than those from year 2005. Because the known vulns from those days got patched, and methodologies improved enough that they weren't replaced by newer vulns 1:1.

        This is what allows defenders to keep up with the attackers long term. My concern is that AGI is the kind of thing that may result in no "long term".

  • HarHarVeryFunny a day ago

    At the end of the day AI at any level of capability is just automation - the machine doing something instead of a person.

    Arguably this may change in the far distant future if we ever build something of significantly greater intelligence, or just capability, than a human, but today's AI is struggling to draw clock faces, so not quite there yet...

    The thing with automation is that it can be scaled, which I would say favors the attacker, at least at this stage of the arms race - they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.

    I suppose the defenders could do the exact same thing though - use this kind of automation to find their own vulnerabilities before the bad guys do. Not every corporation, and probably extremely few, would have the skills to do this though, so one could imagine some government group (part of DHS?) set up to probe security/vulnerability of US companies, requiring opt-in from the companies perhaps?

    • goalieca a day ago

      My take on government APTs is that they are boutique shops that do highly targeted attacks, develop their own zero days (which they don't usually burn unless they have plenty to spare), and are willing to take time to go undetected.

      Criminal organizations take a different approach, much like spammers where they can purchase/rent c2 and other software for mass exploitation (eg ransomware). This stuff is usually very professionally coded and highly effective.

      Botnets, hosting in various countries out of reach of western authorities, etc are all common tactics as well.

  • CuriouslyC a day ago

    IMO AI favors attackers more than defenders, since it's cost prohibitive for defenders to code scan every version of every piece of software you use routinely for exploits, but not for attackers. Also, social exploits are time consuming, and AI is quite good at automating them, and these can take place outside your security perimeter, so you'll have no way of knowing.

  • intended a day ago

    There’s a report with Bruce Schneier that estimates GenAI tools have increased the profitability of phishing significantly [1]. They create emails with higher click through rates, and reduce the cost of delivering them.

    Groups which were too unprofitable to target before, are now profitable.

    [1] https://arxiv.org/abs/2412.00586?

DarkmSparks a day ago

Tldr.

Anthropic made a load of unsubstantiated accusations about a new problem they don't specify.

Then at the end Anthropic proposed that the solution to this unspecified problem is to give Anthropic money.

Completely agree that this is promotional material masquerading as a threat report, of no material value.

Bombthecat 21 hours ago

In the future, I expect AIs defending against AIs. Just like Shadowrun, where each host gets a security level, meaning how much time the AI will allocate to the host to monitor and react :)

makaking 8 hours ago

I agree that these reports should be verifiable and provide more details about the method and how to protect your own network. Even more so if they want to be heard by serious security teams.

However, regardless of the sloppy report, this is absolutely true.

>"Security teams should experiment with applying AI for defense in areas like SOC automation, threat detection, vulnerability assessment, and incident response and build experience with what works in their specific environments."

... And it will be more so with every week that goes by. We are entering a new domain and security experts need to learn how to use the tools of the attackers.

neuroelectron a day ago

So Claude will reject 9 out of 10 prompts I give it and lecture me about safety, but somehow it was used for something genuinely malicious?

Someone make this make sense.

  • goalieca a day ago

    LLMs are rather easy to convince. There’s no formal logic embedded in them that provably restricts outputs.

    The less believable part for me is that people persist long enough and invest enough resources in prompting to do something with an automated agent that doesn't have the potential to massively backfire.

    Secondly, they claimed to use Anthropic's own infrastructure, which is silly. There's no doubt some capacity in China to do this. I also would expect incident response, threat detection teams, and other experts to be reporting this to Anthropic if Anthropic doesn't detect it themselves first.

    It sure makes good marketing to go out and claim such a thing though. This is exactly the kind of FOMO panic inducing headline that is driving the financing of whole LLM revolution.

    • apples_oranges a day ago

      There are LLMs which are modified to not reject anything at all; AFAIK this is possible with all LLMs. No need to convince.

      (Granted, you have to have direct access to the LLM, unlike Claude where you just have the frontend, but the point stands: no need to convince whatsoever.)

  • cbg0 a day ago

    I've never had a prompt rejected by Claude. What kind of prompts are you sending where "9 out of 10" get rejected?

    • neuroelectron a day ago

      Basic system administration tasks, creating scripts for automating log scanning, service configuration, etc. often it involves PII or payment.

    • yahoozoo 11 hours ago

      If you ask it to help you write a bot/cheat for a video game it will usually refuse due to breaking the game's terms of service, etc.

  • danielbln a day ago

    I've rarely had Claude reject a prompt of mine. What are you prompting for to get a 90% refusal rate?

MagicMoonlight a day ago

Anthropic make a lot of bullshit reports to tickle the investors.

They'll do stuff like prompt an AI to generate text about bombs, and then say "AI decides completely by itself to become a suicide bomber in shock evil twist to AI behaviour - that's why you need a trusted AI partner like anthropic"

Like come on guys, it's the same generic slop that everyone else generates. Your company doesn't do anything.

  • hello_moto 21 hours ago

    Someone reminds me all the time: consider AI as “companions” and “opinions”.

    AI (adhd, neurodivergence) entrepreneurs took opinions and made them facts.

    It takes certain personalities to lead an AI company.

itsdrewmiller a day ago

My prior on “state sponsored actor” is 90% “just some guy”. Some combination of CYA and excitement makes infosec people jump to conclusions like crazy.

ineedasername a day ago

>This involved querying internal services, extracting authentication certificates from configurations, and testing harvested credentials across discovered systems.

How? Did it run Mimikatz? Did it access cloud environments? We don't even know what kind of systems were affected.

I really don't see what is so difficult to believe, since the entire incident reduces to something that would not typically be divulged by any company at all: it is not common practice for companies to announce every single time previously known methodologies have been used against them. Two things are required for this:

1) Jailbreak Claude past its guardrails. This is not difficult. Do people believe guardrails are so hardened through fine-tuning that this is no longer possible?

2) The hackers having some of their own software tools for exploits that Claude can use. This too is not difficult to credit.

Once an attacker has done this, all Claude is doing is using software in the same mundane fashion as it does every time you use Claude Code and it utilizes any tools to which you give it access.

I used a local instance of Qwen3 Coder (A3B 30B, quantized to IQ3_XXS) literally yesterday through Ollama & Cline. With a single zero-shot prompt it wrote the code to use the arXiv API and download papers, using its judgement of what was relevant to split the results into a subset that met the criteria I gave for the sort I wanted to review.
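
For a rough sense of what that kind of one-shot output looks like, here's a minimal sketch in the same spirit (not the actual generated script; the query, the keyword filter, and the filename handling are my own stand-ins):

    # Query the arXiv API, apply a crude relevance filter, download matching PDFs.
    import urllib.request
    import urllib.parse
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def search_arxiv(query, max_results=20):
        url = ("http://export.arxiv.org/api/query?"
               + urllib.parse.urlencode({"search_query": f"all:{query}",
                                         "start": 0, "max_results": max_results}))
        with urllib.request.urlopen(url) as resp:
            feed = ET.fromstring(resp.read())
        for entry in feed.findall(f"{ATOM}entry"):
            title = " ".join(entry.findtext(f"{ATOM}title", "").split())
            summary = entry.findtext(f"{ATOM}summary", "").strip()
            pdf = next((l.get("href") for l in entry.findall(f"{ATOM}link")
                        if l.get("title") == "pdf"), None)
            yield title, summary, pdf

    def download_relevant(query, keyword, out_dir="."):
        # crude keyword filter standing in for the model's "judgement" of relevance
        for title, summary, pdf in search_arxiv(query):
            if pdf and keyword.lower() in (title + " " + summary).lower():
                fname = f"{out_dir}/{title[:60].replace('/', '_')}.pdf"
                urllib.request.urlretrieve(pdf, fname)
                print("saved", fname)

    download_relevant("jailbreak", "agent")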

Given these sorts of capabilities, why is it difficult to believe this can be done using the hackers' own tools and typical deep-research-style iteration? This is described in the research paper, and disclosing anything more specific is unnecessary because there is nothing novel to disclose.

As for not releasing the details, they did: Jailbreak Claude. Again, nothing they described is novel such that further details are required. No PoC is needed, Claude isn't doing anything new. It's fully understandable that Anthropic isn't going to give the specific prompts used for the obvious reason that even if Anthropic has hardened Claude against those, even the general details would be extremely useful to iterate and find workarounds.

For detecting this activity and determining how Claude was doing it, it's just a matter of monitoring chat sessions in such a way as to detect jailbreaks, which again is very much not a novel or unknown practice among AI providers.
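
As a crude illustration of what session-level (rather than per-prompt) monitoring could look like - purely hypothetical, not how Anthropic actually does it - the idea is to score the combined intent of a conversation's decomposed subtasks rather than each innocuous-looking prompt in isolation:

    # Toy sketch: benign-looking subtasks add up when scored across the whole session.
    SUSPICIOUS = {"port scan", "credential", "exfiltrate", "lateral movement", "mimikatz"}

    def session_risk(prompts):
        text = " ".join(prompts).lower()
        hits = {term for term in SUSPICIOUS if term in text}
        return len(hits) / len(SUSPICIOUS), hits

    score, evidence = session_risk(["port scan 10.0.0.0/24",
                                    "parse these credential dumps",
                                    "plan lateral movement to the domain controller"])
    if score > 0.4:
        print("flag session for review:", evidence)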

Especially in the internet's earlier days it was amusing (and frustrating) to see some people get very worked up every time someone did something that boiled down to "person did something fairly common, only they did it using the internet." This is similar, except it's "but they did it with AI."

gadsnprch 19 hours ago

Why isn’t Anthropic held liable for crimes committed with their product? I feel totally befuddled as to why that is not the conversation, but rather Anthropic is doing a victory lap like they are the good guys, despite their product enabling widespread fraud while they amass outrageous, undeserved profits. Why is Anthropic not liable?

  • saagarjha 15 hours ago

    Because deciding how much culpability they have is not a solved problem.

    • gadsnprch 14 hours ago

      Thanks for responding. Solving it will involve public discourse. The negative externality will not be forever ignored and frantic, knee-jerk, legislative or judicial solutions are rarely optimal. Everyone, including Anthropic, benefits from starting the culpability discussion now, ideally in a context just like this. Maybe that’s exactly what Anthropic is doing by framing this news as “we stopped some cybercrime” rather than “we were involved in some cybercrime.” But smart people shouldn’t fall for such a blatant shifting of corporate liability onto the public, imo, and that’s why I’m confused. I must be missing something fundamental.

mrobot 14 hours ago

My First Reading Of This Headline Was There Is A Company That Makes Literal Paper Named Anthropic Whose Paper Smells Like Literal Bullshit Which Is Also My Final Reading

humanlity a day ago

There is only one reason, I guess: Dario Amodei must have suffered tremendous harm from Baidu.

neilk 20 hours ago

So details were left out and it doesn't adhere exactly to this author's idea of what a good security report is.

Nothing to see here IMO.

The simpler explanation is that:

- They're a young organization, still figuring out how to do security. Maybe getting some things fundamentally wrong, no established process or principles for disclosure yet.

- I have no inside info, but I've been around the block. They're in a battle to the death with organizations that are famously cavalier about security. So internally they have big fights about how much "brakes" they can allow the security people to apply to the system. Some of those folks are now screaming "I TOLD YOU SO". Leaders will vacillate about what sort of disclosure is best for Anthropic as a whole.

- Any document where you have technologists writing the first draft, and PR and executives writing the last draft, is going to sound like word salad by the time it's done.

JCM9 a day ago

The author isn’t wrong here.

With the Wall Street wagons circling on the AI bubble expect more and more puff PR attempts to portray “no guys really, I know it looks like we have no business model but this stuff really is valuable! We just need a bit more time and money!”

Vsimpro 20 hours ago

PoC || GTFO, sorry big AI, this applies to you too x)

MaxPock a day ago

Dario has been a red scare jukebox for a while. Dario has for a year been trying to convince us how open source cCp AI bad and closed source American AI good. Dario, driven by the democratic ideals he holds dear, has our best interests at heart. Let us all support the banning of cCp's open source AI and welcome Dario's angelic firewall.

JKCalhoun a day ago

Says "smells a lot like bullshit" but concludes:

"Look, is it very likely that Threat Actors are using these Agents with bad intentions, no one is disputing that. But this report does not meet the standard of publishing for serious companies."

Title should have been, "I need more info from Anthropic."

casey2 16 hours ago

Excuse me, but I believe the PC term is hallucination

guluarte 8 hours ago

Those papers are marketing campaigns and should be seen as such.

kkzz99 a day ago

Even Claude thinks the report is bullshit. https://x.com/RnaudBertrand/status/1989636669889560897

  • emil-lp a day ago

        Even your own AI model doesn't buy your propaganda
    
    Let's not pretend the output of LLMs has any meaningful value when it comes to facts, especially not for recent events.

    • oskarkk a day ago

      The LLM was given Anthropic's paper and asked "Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate". So the question was not about facts or recent events, but more like a summarizing task, for which an LLM should be good. But the question was specifically about China, while TFA has broader criticism of the paper.

    • lxgr a day ago

      There are obvious problems with wasting time and sending people off the wrong path, but if an LLM raises a good point, isn't it still a good point?

      • chasing0entropy 21 hours ago

        A broken analog clock will be accurate twice a day despite being of zero use. Someone attempting to sell the broken clock as useful because it "accurately returns the time at least twice every day" would ultimately be causing harm to the consumer.

        • lxgr 16 hours ago

          Depends on what you need the clock for. For example, if it's to serve as an adjustable sign indicating e.g. the closing time of a store, a broken one does the trick just fine :)

          In other words: Use the right tool for the right job.

    • FooBarWidget a day ago

      Even if this assertion about LLMs is true, your response does not address the real issue. Where is the evidence?

  • r721 a day ago

    @RnaudBertrand is a generally pro-Chinese account though - just try searching for "from:RnaudBertrand China" on X.

    Example tweet: https://x.com/RnaudBertrand/status/1988297944794071405

    • tw1984 a day ago

      That is why the task was delegated to the agent designed and maintained by Dario Amodei's company. The outcome is clear - Claude doesn't buy Dario Amodei's crap.

  • progval a day ago

    The author of the tweet you linked prompted Claude with this:

    > Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" they claimed was "conducted by a Chinese state-sponsored group."

    > Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate

    which has an inherent bias, indicating to Claude that the author expects the report to be bullshit.

    If I ask Claude with this prompt that shows bias toward belief in the report:

    > Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" that was conducted by a Chinese state-sponsored group.

    > Is there any reason to doubt the paper's conclusion that it was conducted by a Chinese state-sponsored group? Answer by yes or no.

    then Claude mostly indulges my perceived bias: https://claude.ai/share/b3c8f4ca-3631-45d2-9b9f-1a947209bc29

    • shalmanese a day ago

      > then Claude mostly indulges my perceived bias

      I dunno, Claude still seems about as dubious in this instance.

    • FooBarWidget a day ago

      The only real difference between your prompt and his is about where the burden of proof lies. There is a reason why legal circles work based on the principle of "guilt must be proven" ("find evidence") rather than "innocence must be proven" ("any reasons to doubt they are guilty?")

  • phyzome a day ago

    Claude will probably also tell you there are three Rs in blueberry, so...

  • mlefreak a day ago

    I agree with emil-lp, but it is hilarious anyway.

DeathArrow a day ago

We are supposed to trust them without any proof because they are Anthropic and they are big?

0xRake a day ago

weeeeeeeeeeeelllllllllllllllll I mean it's not as if they're in the fabricated bullshit and confabulated garbage business now - is it? :rofl:

hereme888 21 hours ago

I suspect there are CCP agents both here in Hacker News and everywhere else, trying to undermine the reality of China-sponsored malicious behavior.

I'm not a cybersecurity expert, but it doesn't compute to think there would be any specific "hashes" to report if it's an AI-based attack that constantly uses unique code or patterns for everything.

Plus, there's nothing surprising about the Chinese stealing and hacking anything for their advantage.

  • hello_moto 21 hours ago

    It’s more likely that there is more Western VC propaganda here than CCP propaganda.

    The HN of the Paul Graham era is finished.

    This is the HN of the Sam Altman and Garry Tan era.

    Different VC/capitalist mindset

tw1984 a day ago

Dario Amodei, the CEO of Anthropic, openly lied to the public back in March that AI would be writing 90% of the code by Sept. It is Nov now.

He obviously doesn't even know the stuff he is working on. How would anyone take him seriously for stuff like security which he doesn't know anything about?

  • dangoodmanUT a day ago

    > openly lied

    He made a prediction from a reasonably informed vantage point

    • sota_pop 19 hours ago

      > openly lied

      Surely he merely hallucinated based on a fine-tuned distribution, and had no ulterior motive for projecting a level of growth in technical sophistication beyond their current capability onto a somewhat lay, highly speculative, very wealthy crowd.

    • tw1984 11 hours ago

      If he didn't intentionally mislead the public, he should have been open about it and acknowledged the fact that he was utterly wrong.

      Given that he made such a "prediction" largely to secure funding for his company, he should probably compensate his investors. He didn't do anything.

      The best he could do was bad-mouth competitors who chose to release open-weight models on par with his.

mark_l_watson a day ago

Is it my imagination, or don’t the CEOs of Anthropic and OpenAI spread around a lot of bullshit whenever they want to raise more money, or even worse, try to get our government to set up regulatory barriers to hurt competitors?

I think this ‘story’ is an attempt to perhaps outlaw Chinese open weight models in the USA?

I was originally happy to see our current administration go all in on supporting AI development but now I think this whole ‘all in’ thing on “winning AI” is a very dark pattern.

  • jjtheblunt 20 hours ago

    Seeing your comment downvoted, I wonder what the downvoters think differently.

    I say that because your sentiment seems so similar to nearly all the other comments.

    (perhaps downvoting without commentary is itself a collaborative dark pattern.)

zyf a day ago

Good article. We really deserve more than shit like this.

nextworddev a day ago

Always bet against HN if you want to be right. Anthropic valuations to go brrr

zyngaro a day ago

The goal of the report is basically FUD.

IAmGraydon a day ago

Just more of the same grift from the AI industry. We’re in the melt-up. It will become exponentially harder for them to maintain the illusion moving forward.

nalekberov a day ago

I have never taken any AI company seriously, but Anthropic's attitude has fed me up to the point that I deleted my account.

Instead of accusing China of espionage, perhaps they should think about why they force their users to use phone numbers to register.

bgwalter a day ago

This is an excellent article. Anthropic's "paper" is just rambling slop without any details that inserts the word "Claude" 50 times.

We have arrived at a stage where pseudoscience is enough to convince investors. This is different from 2000, where the tech existed but its growth was overstated.

Tesla could announce a fully-self-flying space car with an Alcubierre drive by 2027 and people would upvote it on X and buy shares.

  • HacklesRaised a day ago

    I suppose it's the problem with AI in general. It's an interesting technology looking for a business model that just isn't there, at least not one that comes even close to justifying the cost.

    I hate the fact that it has sucked all the oxygen from the room and enabled an entirely new cadre of grifters all of whom will escape accountability when it unfolds.

  • PunchyHamster a day ago

    > We have arrived at a stage where pseudoscience is enough to convince investors.

    "Arrived" ? We're there for decade if not three. Dotcom bubble anyone ?

leric 11 hours ago

[dead]

AyanamiKaine a day ago

It seems that various LLM companies try to fear-monger, saying how dangerous it is to use them in "certain ways", with the possible intention to lobby for legislation.

But what is the big game here? Is it all about creating gates to keep other LLM companies from getting market share ("only our model is safe to use")? Or how sincere are the concerns regarding LLMs?

  • HarHarVeryFunny a day ago

    Could be that, or could be just "look at how powerful our AI is", with no other goal than trying to brainwash CEOs into buying it.

  • ungreased0675 a day ago

    Outlaw local LLMs is one possibility.

    Another possibility could be complex regulations that are difficult for smaller companies to comply with, giving larger companies an advantage.

  • JKCalhoun a day ago

    If fear were their marketing tactic, it sounds like it could just as easily have the opposite effect: souring the public on AI's existence altogether — perhaps making people think AI is akin to a munition that no private entity should have control over.

  • biophysboy a day ago

    I think the perceived value of LLMs is so high in these circles that they earnestly have a quasi-religious “doomsday” fear of them.

yanhangyhy a day ago

Maybe the CEO got abused at Baidu, so he hates China so much.

quantum_state a day ago

Anthropic is losing it … this is all the “report” indicated to people …

cadamsdotcom 14 hours ago

> Personally, I don’t use it (Claude) but that is besides the point

There goes the author’s credibility.