Between July 21 and 27, Worldcoin set off security and privacy alarms; threat actors stole from AlphaPo, CoinsPaid, Era Lend and Conic Finance; hackers set a cryptojacking record; Apple users became the target of crypto-stealing malware; and the DOJ merged its computer crime and crypto crime units.
Is the Akira ransomware story coming to an end? Security researchers say the group competed in a contest run by Royal to build it a new cryptolocker - and lost. Even with a free decryptor now available for Akira victims, however, it's too soon to say whether the group is doomed.
Natural language models aren't the boon to auditing that many in the Web3 community hoped generative artificial intelligence tools would be. After a burst of optimism, the consensus now is that AI tools generate well-written, perfectly formatted - and completely worthless - bug reports.
Unintended bias in artificial intelligence outweighs deliberate misuse among the privacy concerns raised by AI-driven facial recognition in public areas, according to Harry Boje, data protection and privacy officer at Paydek.
Cybercriminals are using an evil twin of OpenAI's generative artificial intelligence tool ChatGPT. It's called FraudGPT, it's available on criminal forums, and it can be used to write malicious code and create convincing phishing emails. A similar tool called WormGPT is also available.
A startup led by former AWS and Oracle AI executives completed a Series A funding round to strengthen security around ML systems and AI applications. Seattle-based Protect AI plans to use the $35 million investment to expand its AI Radar tool and research unique threats in the AI and ML landscape.
A startup founded by two Israel Defense Forces veterans and backed by the likes of Insight Partners and Cyberstarts could soon be acquired by CrowdStrike. The endpoint security firm is in advanced negotiations to purchase Silicon Valley-based application security posture management vendor Bionic.
Supply chain compromise, open-source technology and rapid advances in AI capabilities pose significant challenges to safeguarding artificial intelligence systems. The "giant leap" achieved by systems such as ChatGPT makes it tough to discern whether someone is interacting with a human or a machine.
Attackers are increasingly using carefully crafted business logic exploits that effectively social engineer an API into doing something it wasn't intended to do, according to Stephanie Best, director of product marketing for API security at Salt Security.
Thales has agreed to purchase Imperva for $3.6 billion to enter the application and API security market and expand its footprint in data security. The deal will add a robust web application firewall along with capabilities in API protection and data discovery and classification to Thales' portfolio.
A new IBM study of data breaches found that organizations whose internal teams detect a breach first, and that have well-practiced incident response plans, detect and respond more quickly - leading to lower breach cleanup costs.
SMBs must deal with heightened digital risk despite having fewer resources, personnel and intelligence than their larger counterparts, said Qualys CEO Sumedh Thakar. Firms rely on different teams and tools to discover assets, find misconfigurations and vulnerabilities, prioritize them and patch them.
The Russian-language Clop crime group's mass exploitation of MOVEit file-transfer software demonstrates how criminals continue to seek fresh ways to maximize their illicit profits with minimal effort. Ransomware response firm Coveware says Clop may clear over $75 million from this campaign.
What does generative AI mean for security? In the short term, and possibly indefinitely, we will see offensive or malicious AI applications outpace defensive ones that use AI for security. We also will see an outsized explosion in new attack surfaces. HackerOne can help you prepare your defenses.
With both excitement and fear swirling around the opportunities and risks offered by emerging AI, seven technology companies - including Microsoft, Amazon, Google and Meta - have promised the White House they would ensure the development of AI products that are safe, secure and trustworthy.