

Meta and OpenAI openly pirating everything they can to train their LLMs is a good example of how data-hungry these AI companies are.
Is it plausible that companies ask Reddit to narrow the data down, e.g. by demographic, geographic location, or likelihood of being a real person, before purchasing it? Sure. But LLMs seemingly require every scrap of data these companies can get their hands on, so given the scale of data being consumed (and the data theft being committed), I highly doubt the big players care much about Reddit data being tainted. If anything, it might even be desirable to them.
I think the IT team's performance is relevant here. Why would an employee be able to easily run an unknown binary from the internet in the first place? If the systems were properly configured to block this, there would be no issue (a rough sketch of what that enforcement could look like is below). If I were an executive, I would absolutely be looking at my IT team in this case.
If the employee went entirely out of their way to run an unknown binary, bypassing OS-level restrictions and sidestepping established procedures, then the employee should be fired.
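To be concrete about what "properly configured" can mean: in practice this is usually off-the-shelf application allowlisting (AppLocker/WDAC on Windows, fapolicyd on Linux), but the core idea is just "refuse to execute anything not on an approved list." Here's a minimal Python sketch of hash-based allowlisting; the `APPROVED_SHA256` set and `run_if_approved` helper are hypothetical names for illustration, and a real deployment would enforce this at the OS level via signed, centrally managed policy rather than a wrapper script.

```python
import hashlib
import subprocess
import sys

# Hypothetical allowlist: SHA-256 digests of binaries approved by IT.
# The single entry below is the digest of an empty file, used purely
# as a placeholder.
APPROVED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def run_if_approved(path: str, *args: str) -> int:
    """Refuse to execute any binary whose hash isn't on the allowlist."""
    digest = sha256_of(path)
    if digest not in APPROVED_SHA256:
        print(f"BLOCKED: {path} ({digest}) is not an approved binary.")
        return 1
    return subprocess.run([path, *args]).returncode

if __name__ == "__main__":
    if len(sys.argv) < 2:
        print("usage: run_if_approved.py <binary> [args...]")
        sys.exit(2)
    sys.exit(run_if_approved(sys.argv[1], *sys.argv[2:]))
```

The point isn't this exact script; it's that "an employee double-clicked something from the internet and it ran" means no layer like this existed, and that's an IT failure before it's an employee failure.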