The time from vulnerability disclosure to exploitation is decreasing, according to a new intelligence report from Rapid7.
About SecurityWeek Cyber Insights | At the end of 2022, SecurityWeek liaised with more than 300 cybersecurity experts from over 100 different organizations to gain insight into the security issues of today – and how these issues might evolve during 2023 and beyond. The result is more than a dozen features on subjects ranging from AI, quantum encryption, and attack surface management to venture capital, regulations, and criminal gangs.
SecurityWeek Cyber Insights 2023 | The Coming of Web3 – Web3 is a term that has been hijacked for marketing purposes. Since web3 obviously represents the future internet, claiming to be web3 now is a claim to be the future today. Such claims should be viewed with caution – we don’t yet know what web3 will be.
Two of the biggest culprits are the cryptocurrency and NFT investment industries, which both use blockchains. They have claimed to be web3 so vociferously that some pundits believe that web3 is blockchain. This is way too simplistic – these are just applications running on one technology that may become one of the web3 building blocks.
Before we discuss the evolution of, and issues with, web3 in 2023 and beyond, we’ll first define one specific view of its basics.
Web3 will be the next fundamental characterization of the internet. Currently, its characteristics are clouded by confusion because it doesn’t exist. We won’t know what it is, until it is. Nevertheless, we can make some basic predictions because it will evolve from the current web2 and is bound by the rules of evolution. So, we must start with where we are to predict where we are going.
Web1 can be described as the static web. It was designed to deliver static information from information creators to information consumers. We still use web1.
Web2 can be described as the interactive web. It was designed to allow creators and consumers to interact. Three major examples are online banking, ecommerce, and social media. This is what we have now: a combination of web1 and web2.
Web3 can be described as whatever comes next. It will be an attempt to improve on web1 and web2 – most likely by correcting perceived faults or weaknesses in web2 and improving the user's internet experience. Our projections for web3 focus on two such characteristics, decentralization and the metaverse – but remember that at this stage, it is still just conjecture.
Decentralization
A perceived fault in web2 is that it allows data to be centralized and concentrated in the hands of a few mega corporations. Big tech companies – Facebook, Microsoft, Google, and Apple among them – own most of the world's available data. More specifically, they own everybody's personal information.
This is a problem both politically and socially, and it is the primary driver behind legislation designed to prevent big tech (and medium tech) from misusing and abusing personal data. GDPR, CCPA and other privacy legislation, and the FTC's increasing oversight of such misuse (which it defines as malpractice), can be seen as political attempts to correct this fault in web2. We can add that the centralization of data is also a primary cause of cybercrime, providing Aladdin's caves of rich pickings for criminals.
A better solution would be for the internet itself to reduce the stranglehold of big tech by becoming decentralized — companies do not need to own data to be able to confirm identity. Isolated attempts at decentralization already exist. Cryptocurrency (technically, at least) is an attempt to decentralize finance. The InterPlanetary File System (IPFS) is an attempt to decentralize data held in individual files.
One likely component of web3 will be a decentralized internet — and the distributed ledger implemented as blockchains is the most likely route. Big tech will not support this evolution.
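To make the distributed ledger idea concrete, below is a minimal, illustrative hash chain in Python. It is a toy sketch of the mechanism underlying any blockchain, not a production ledger, and it shows why tampering with one block invalidates every block after it.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    data: str
    prev_hash: str

    def hash(self) -> str:
        # Hash the block's contents together with the previous block's hash,
        # so changing any earlier block changes every later hash.
        payload = json.dumps({"i": self.index, "d": self.data, "p": self.prev_hash})
        return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64          # the genesis block points at an all-zero hash
    for i, rec in enumerate(records):
        block = Block(i, rec, prev)
        chain.append(block)
        prev = block.hash()
    return chain

def verify(chain) -> bool:
    prev = "0" * 64
    for block in chain:
        if block.prev_hash != prev:     # any edit breaks the link here
            return False
        prev = block.hash()
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))        # True
chain[0].data = "alice pays bob 500"
print(verify(chain))        # False: the tampering is immediately detectable
```

In a real distributed ledger, many independent nodes hold copies of this chain and agree on new blocks by consensus, which is what removes the need for a single central owner of the data.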
Immersive
A move towards a more immersive internet experience is already in progress. The improvement on web2 is that users wish to move beyond interacting with the internet to becoming part of the experience. This development can be seen in the evolution of the gaming industry — from text-based adventure games, to video platform games, to 3D games and now virtual reality gaming.
But it is also apparent in business. Covid-19 created a need for remote conferencing. This was already available via telephone conferences; but the rapid rise of videoconference tools such as Zoom demonstrates users’ wish to feel more involved – or integrated with the experience. The next logical step is for videoconferencing to evolve into virtual reality conferencing using the same tools and techniques developed for virtual reality gaming.
Web2 is already evolving towards an immersive internet, and ‘immersive’ is likely to be another component of web3. The current ultimate view of an immersive experience is the metaverse.
Web3
The evolutionary pressures on the internet seem to be focusing on two characteristics: decentralization and immersiveness. This is how we will describe the next internet. Note that neither characteristic is dependent on the other, but there is synergy in their marriage. Metaverses do not need to be decentralized but can become so using distributed ledger technology (DLT). Metaverse and DLT are likely to be the key components of web3. The evolution will not be completed in 2023 (in fact, it has barely begun), but there will be much progress in that direction.
But note also that there are competing pressures. Big tech recognizes the value of the metaverse concept (Facebook has even changed its name to Meta), but big tech will not want decentralized metaverses where they lose ownership of users’ data.
As it evolves, web3 will inherit and amplify all the security issues of web2 – and perhaps add a few of its own.
The excitement with which 2022 greeted the dream of the metaverse dissipated into disillusionment over the course of the year. The technology simply isn’t ready to deliver on the dream; but that dream remains.
Massimo Paloni, chief operations and innovations officer at Italian luxury brand Bvlgari, explained both the problems and the promise of the web3 metaverse. When you buy a product, he said, especially a luxury product, you go to the store not just for the product, but also for the experience. The experience is 'storytelling' by the vendor — and all storytelling changes with advances in technology. But the way that technology is used must always be aligned with the vendor's DNA.
“Our mission is to be sure that the use of new technology – web3, blockchain and metaverse – is aligned with our value proposition. That is the key,” he said. Web2 ecommerce fails for the luxury brands. “Ecommerce is a bidimensional experience. It kills the magic of going to the store. All ecommerce is beautifully crafted — but ultimately all the stores are similar.”
The promise of the metaverse is that it will allow vendors, especially luxury vendors, to maintain their own storytelling in engaging with their customers. Better engagement, which isn't supported by web2, will lead to higher sales. But the problem is that the technology is new and evolving, and developers still don't know what might be available in three or four months' time – never mind a few years.
“Content is absolutely king here – the technology will undoubtedly become better and better, but this is not enough in itself,” says Lars Seier Christensen, chairman of Concordium and founder of Saxo Bank. “Users need to achieve real benefits to embrace it – across areas like entertainment, better access to goods and services, valid commercial models and otherwise unachievable experiences.” What we’ve seen so far over the last year or two has failed to deliver this, and quickly became boring and irrelevant.
“2022 wasn’t a good year for the metaverse,” adds Orlando Crowcroft, tech and innovation editor at LinkedIn. “Two of the most prominent metaverse platforms – Decentraland and Sandbox, with valuations of over $1bn each – were revealed to have under 1,000 daily active users. And Meta’s Horizon World was so unpopular that even staff had to be pressured to use it.”
But he added, “Metaverse enthusiasts should take heart. In 2023, we will see the metaverse take off – in the professional world. VR and AR are being used right now to train pilots and surgeons. Expect employers, universities, and training programs to jump into the metaverse in even bigger ways in the new year.”
Just as the metaverse is a new concept, crime in the metaverse is an unknown quantity. “Virtual cities and online worlds are new attack surfaces to fuel cybercrime,” warns Aamir Lakhani, cybersecurity researcher and practitioner for Fortinet’s FortiGuard Labs. He is concerned that a metaverse will be an open door to new cybercrime in uncharted territories.
“For example, an individual’s avatar is essentially a gateway to PII, making them prime targets for attackers. Because individuals can purchase goods and services in virtual cities, digital wallets, crypto exchanges, NFTs, and any currencies used to transact, offer threat actors yet another emerging attack surface,” he said.
He also worries about biometric hacking. The AR- and VR-driven components of virtual worlds may make it easier for a cybercriminal to steal fingerprint mapping, facial recognition data, or retina scans. Finally, he added, “The applications, protocols, and transactions within these environments are all also possible targets for adversaries.”
Kaarel Kotkas, founder and CEO at Veriff, sees trust in identity as the biggest problem. “If the metaverse is to be successful,” he said, “there needs to be a guarantee that users are who they say they are.”
“If the Metaverse is to live up to even a portion of its hype,” adds Padraic O’Reilly, co-founder and CPO at CyberSaint, “security will have to be baked in from the start. That is, it should be part of the conception. There should be a kind of cyber charter from the largest participants that stresses transparency, and laws for individuals. Cyber is everyone’s responsibility in the future.”
He also believes that regulation will be required over user identity. “To ensure the security of experiences and transactions in the metaverse, zero trust architecture and more legal protections (blockchain is too authority averse) are required. Without a central authority backing the purported ironclad data integrity of the blockchain, it will remain vulnerable.”
It is worth noting, however, that a ‘central authority’ is at least conceptually contrary to the ideal of decentralization.
Patrick Harr, CEO at SlashNext, continues this theme of identity. “Artificial intelligence solutions will be needed to validate the legitimacy of identities and controls,” he says. “This new type of digital interface will present unforeseen security risks when avatars impersonate other people and trick users into giving away personal data.”
But of course, AI will be used for attack as well as defense. Deepfaked avatars supported by AI chatbots will be used. “We can expect to see more of these holographic-type phishing attacks and fraud scams as the metaverse develops,” he continued. “In turn, folks will have to fight AI with stronger AI because we can no longer rely solely on the naked eye or human intuition to solve these complex security problems.”
Ultimately, security in the metaverse and web3 in general is both a threat and an opportunity. Traditionally, security is largely reactive – we fix things after they have been exploited. But “With web3 we have the opportunity to change the game in terms of security,” suggests Rodrigo Jorge, CISO at Vtex, “and construct something that has security by design, and is planned from user experience to the system architecture and infrastructure.”
He believes security professionals and companies have the opportunity to adopt security in this early stage so that when web3 becomes popular, it will be safe.
“Web3 reflects an architectural shift decentralizing management of platforms. As platforms decentralize, the organizations that manage them will have to find ways to federate replacement controls for those they had centrally deployed,” says Archie Agarwal, founder and CEO at ThreatModeler. “When organizations design such tectonic shifts in their architecture (like the aggressive decentralization of web3), it’s incumbent on them to model the threats and adjust their security controls that such a shift will expose.”
While cryptocurrency (as opposed to cryptocurrency technology) is peripheral to a discussion on web3, it cannot be dismissed entirely. Bitcoin demonstrated the security available in the blockchain implementation of the distributed ledger. But it is blockchain rather than cryptocurrency that is important to the development of web3.
Merav Ozair, a fintech professor at Rutgers Business School, commented on Nasdaq (December 20, 2022), “There is no doubt that the benefits of blockchain technology and web3 are immense. Jamie Dimon, CEO of JPMorgan, who has bashed bitcoin, has always been one of the great supporters of blockchain technology. JPMorgan is one of the leading companies in web3 and has made significant investments in blockchain technology, web3 and the metaverse since 2015.”
She also notes that the value of decentralization (in this case, cryptocurrency) has been demonstrated during the Ukraine/Russia conflict. The Ukrainian government has asked for donations in cryptocurrency, which has been adopted as a primary currency in the country.
“These instances underscore the promise of blockchain when Bitcoin, the first blockchain, was launched in January 2009, that a decentralized, peer-to-peer system, accessed by everyone, with no need for intermediaries, can empower everyday people: a system that is for the people, by the people,” she explained. This is the primary advantage of decentralization.
A security weakness in the unfolding web3 will come from its ‘newness’. “Looking forward, attackers are again adjusting their tactics to target individuals in the new web3 world,” comments Hank Schless, director of global campaigns at Lookout. “Since web3 is still a new concept for most people, attackers can rely on the unfamiliar environment to increase the likelihood of success. This is a common tactic, as targeted individuals may not know exactly what red flags to look for in the same way they do with a suspicious social media message.”
Christian Seifert, researcher at Forta, takes this further. "The current state of the DeFi market [currently the primary implementation of decentralized blockchain], especially with mounting losses due to hacks and rug pulls, has reduced some of the trust that investors previously had in this industry," he said.
“I believe the problem will continue to persist unless better security measures are implemented across the board. In this regard, we need an overhaul of the security strategies prevalent today to provide better end user privacy (via the use of, say, wallets) and improved protocol safety.”
In particular, he recommends “routine audits, offering bug bounties, maximizing monitoring and incident response – potentially via the use of future-ready technologies such as artificial intelligence and machine learning – and offering clients cyber insurance.”
Since the blockchain was originally developed for use in the finance sector, it should be no surprise that the finance industry is one of the more interested sectors. “There is a major trend of blockchain adoption in large financial institutions,” says Nick Landers, director of research at NetSPI, specifically citing Broadridge, Citi and BNY Mellon.
“The primary focus,” he continued, “is custodial offerings of digital assets, and private chains to maintain and execute trading contracts. Despite what popular culture would indicate, the business use cases for blockchain technology will likely deviate starkly from popularized tokens and NFTs.” Instead, he believes, industries will prioritize private chains to accelerate business logic, digital asset ownership on behalf of customers, and institutional investment in proof-of-stake chains.
By the end of next year, he expects that every major financial institution will have announced adoption of blockchain technology, if it hasn’t already. “While Ethereum, EVM, and Solidity-based smart contracts have received a huge portion of the security research, nuanced technologies like Hyperledger Fabric have received much less. In addition, the supported features in these business-focused private chain technologies differ significantly from their public counterparts.”
It is worth noting that private blockchains are not decentralized blockchains – which raises the question: are they really web3?
Either way, this ultimately means more attack surface, more potential configuration mistakes, and more required training for development teams. “If you thought that blockchain is ‘secure by default’,” added Landers, “think again. Just like cloud platform adoption, we’ll see the promises of ‘secure by default’ fall away as unique attack paths and vulnerabilities are discovered in the nuances of this technology.”
Dissatisfaction with big tech’s control of social media has led to the exploration of alternative decentralized approaches. Mastodon, as an alternative to Twitter, is one example. It is decentralized but based on federation rather than blockchain. “Instant global communication is too important to belong to one company,” explains the Mastodon website. “Each Mastodon server is a completely independent entity, able to interoperate with others to form one global social network.”
But a blockchain – more specifically a multichain – social media alternative may appear in 2023. On December 20, 2022, Beepo officially closed the beta version of its decentralized app, and expects to launch in early 2023.
At the beginning of December 2022, Concordium announced an agreement with Beepo to incorporate its native token, CCD, as a means of payment on the platform. “Beepo, a blockchain-based platform powered by E2EE and an AI/ML algorithm with a focus on privacy and security,” explained Concordium, “is protected by end-to-end encryption technology and autonomous moderation, ensuring a totally secure environment for user interactions.”
The Beepo App offers a DApp (decentralized app) browser, tools for independent contractors, features for content creators, and a multichain blockchain infrastructure that lets users engage with various tokens and multiple networks. It is a response to growing user concern over the controlling and often abusive concentration of personal data within web2 big tech firms.
The blockchain part of web3 (disregarding the question of whether private blockchains can even be considered part of web3) will probably develop faster than the metaverse during 2023. William Tyson, associate analyst in the thematic intelligence team at GlobalData, foresees a metaverse winter in 2023. He believes the immaturity of enabling technologies like virtual reality (VR) and artificial intelligence (AI), as well as cooling consumer interest, will prevent the metaverse from being adopted widely in the next year.
He adds, "The absence of a single vision for the metaverse means that its future is malleable and uncertain. Its extraordinary long-term potential is widely recognized, which is why big tech is continuing to funnel billions into its creation – despite the absence of short-term return on investment. The concept will experience a cold period, but this provides an opportunity for underlying technologies to develop."
Meanwhile, the blockchain part of the equation will pick up steam in 2023. "We don't have a defining trend for web3 in 2023, but what we do have instead is an undercurrent of heads-down building and experimentation being done both by developers as well as traditional brands, setting the stage for a really exciting 2024," says Dan Abelon, partner at Two Sigma Ventures.
“On the developer side, one area to watch is messaging: enabling decentralized services to communicate directly with end users,” he adds. “On the brand side, I’m excited to see more experiments like those by Reddit and Instagram in recent weeks, that will help bring web3 into the mainstream.”
The metaverse and blockchains are not interdependent – each can exist without the other. However, a decentralized metaverse will require blockchains. Consider a metaverse shopping mall. Like the physical mall, it will comprise multiple businesses operating effectively in one place. In the physical world, shoppers walk from one shop into another. In a web2 shopping mall, they would need a different URL for each store, and would have to log on and present identity credentials to each one.
In a decentralized metaverse, with identity held in a trusted blockchain, identity verification could simply be the presentation of an NFT-like token. This would confirm the user’s identity without requiring personal details to be given to every business in the metaverse – allowing the user to travel freely between the organizations of the mall metaverse.
Within each 'shop', three-dimensional images of goods could be examined. Shopping baskets could be maintained as collections of NFTs associated with the goods, and purchases completed instantly with cryptocurrency or NFTs from the user's wallet.
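As a sketch of how such a token-based identity check might work in principle, consider the Python fragment below. It uses a standard Ed25519 signature from the widely available cryptography package; the names and the challenge-response flow are illustrative assumptions, not any metaverse platform's actual API.

```python
# A toy illustration of token-based identity: the user proves control of a key
# pair whose public half would be anchored to a trusted ledger, and the merchant
# verifies the signature without ever seeing the user's personal details.
# All names here are hypothetical; this is not any platform's real API.
import os
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# User side: a key pair whose public half is registered on the ledger.
user_key = ed25519.Ed25519PrivateKey.generate()
public_identity = user_key.public_key()

# Merchant side: issue a one-time challenge and ask the wallet to sign it.
challenge = os.urandom(32)
signature = user_key.sign(challenge)            # happens in the user's wallet

def verify_visitor(public_key, challenge, signature) -> bool:
    """Accept the visitor if the signature checks out; no PII required."""
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

print(verify_visitor(public_identity, challenge, signature))   # True
```

The point is that the merchant learns only that the visitor controls a key anchored to the ledger; no personal details change hands.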
The primary security issue is fraud via user impersonation, even though user identity is protected by the blockchain. One thing that is certain, however, is that as this new cyber world evolves, criminals will be looking for new ways to attack it.
Web3 will happen. What it will look like is not yet known. Blockchain technology is expanding beyond just cryptocurrency, and the use of non-investment NFTs is growing.
The attraction of a metaverse is undeniable – but we’re going through a phase of disillusionment right now. This is perhaps typified in the disappointment of Meta’s legless cartoon avatar torsos in its Horizon Worlds metaverse.
But we should remember that all of this is new technology with kinks. AR and VR headsets are still maturing; the software is still young. The potential for the metaverse is too great to ignore. Its synergy with decentralization makes it especially attractive.
Web3 won't fully materialize for many years – but its development will continue through 2023 and beyond. The immersive metaverse, rather than blockchain, will be its defining technology.
Related: Securing the Metaverse and Web3
Related: Hackers Steal Over $600M in Major Crypto Heist
Related: Protecting Cryptocurrencies and NFTs – What’s Old is New
Related: How Blockchain Will Solve Some of IoT’s Biggest Security Problems
SecurityWeek Cyber Insights 2023 | Ransomware – The primary purpose of cybercrime is to make money. Extortion has always been a successful and preferred method of achieving this, and ransomware is merely a means of extortion. Its success is illustrated by the continuous growth of ransomware attacks over many years.
The evolution of ransomware has not been static. Its nature has changed as criminals have refined their approach to improve the extortion, and attack volumes (generally trending upward) have ebbed and flowed in reaction to market conditions. The important point, however, is that criminals are not married to encryption; they are married to extortion.
The changing nature of what we still generally call ransomware will continue through 2023, driven by three primary conditions: the geopolitical influence of the Russia/Ukraine war, the improving professionalism of the criminal gangs, and more forceful attempts by governments and law enforcement agencies to counter the threat.
The Russia/Ukraine war has removed our blinkers. The world has been at covert cyberwar for many years – generally along the accepted geopolitical divide – but it is now more intense and more overt. While the major powers, so far at least, have refrained from open attacks against adversaries’ critical infrastructures, criminal gangs are less concerned.
“The rate of growth in ransomware attacks is currently slowing slightly [late 2022] – but this will prove to be a false dawn,” suggests Mark Warren, product specialist at Osirium. “Currently, the most successful teams of cybercriminals are focused on attacking Ukraine’s critical infrastructure. The second that conflict is over, all the technology, tools and resources will be redeployed back into ransomware attacks – so organizations and nation states alike must not become complacent.”
One of the most likely effects of the European conflict will be an increasingly destructive effect from ransomware. This has already begun and will increase through 2023. “We are seeing an increase in more destructive ransomware attacks at scale and across virtually all sector types, which we expect to continue into 2023,” comments Aamir Lakhani, cybersecurity researcher and practitioner for FortiGuard Labs.
“Ransomware will continue to make headlines, as attacks become more destructive, and threat actors develop new tactics, techniques, and procedures to try and stay one step ahead of vendors,” agrees John McClurg, SVP and CISO at BlackBerry.
“We expect ransomware to continue its assault on businesses in 2023,” says Darren Williams, CEO and founder at BlackFog. “Specifically, we will see a huge shift to data deletion in order to leverage the value of extortion.”
There are two reasons for this move towards data deletion. Firstly, it is a knock-on effect of the kinetic and associated cyber destruction in Ukraine. Secondly, it reflects the nature of ransomware itself: remember that ransomware is merely a means of extortion, and the criminals are finding that data extortion is more effective than system extortion via encryption. Andrew Hollister, CISO at LogRhythm, explains in more detail:
“In 2023, we’ll see ransomware attacks focusing on corrupting data rather than encrypting it. Data corruption is faster than full encryption and the code is immensely easier to write since you don’t need to deal with complex public-private key handling as well as delivering complex decryption code to reverse the damage once the victim pays up,” he said.
“Since almost all ransomware operators already engage in double extortion, meaning they exfiltrate the data before encrypting it, the option of corrupting the data rather than going to the effort of encryption has many attractions. If the data is corrupted and the organization has no backup, it puts the ransomware operators in a stronger position because then the organization must either pay up or lose the data.”
It should also be noted that the more destruction the criminal gangs deliver after exfiltrating the data, the more completely they will cover their tracks. This becomes more important in an era of increasing law enforcement focus on disrupting the criminal gangs.
But there is an additional danger that might escape from the current geopolitical situation. Vitaly Kamluk, head of the Asia-Pacific research and analysis team at Kaspersky, explains: "Statistically, some of the largest and most impactful cyber epidemics occur every six to seven years. The last such incident was the infamous WannaCry ransomware-worm, leveraging the extremely potent EternalBlue vulnerability to automatically spread to vulnerable machines."
Kaspersky researchers believe the likelihood of the next WannaCry happening in 2023 is high. “One potential reason for an event like this occurring,” continued Kamluk, “is that the most sophisticated threat actors in the world are likely to possess at least one suitable exploit, and current global tensions greatly increase the chance that a ShadowBrokers-style hack-and-leak could take place.”
Finally, it is worth mentioning an unexpected effect of the geopolitical situation: splintering and rebranding among the ransomware groups. Most of the larger groups are multi-national – so it should be no surprise that different members might have different geopolitical affiliations. Conti is perhaps the biggest example to date.
“In 2022, many large groups collapsed, including the largest, Conti,” comments Vincent D’Agostino, head of digital forensics and incident response at BlueVoyant. “This group collapsed under the weight of its own public relations nightmare, which sparked internal strife after Conti’s leadership pledged allegiance to Russia following the invasion of Ukraine. Conti was forced to shut down and rebrand as a result.” Ukrainian members objected and effectively broke away, leaking internal Conti documents at the same time.
But this doesn’t mean that the ransomware threat will diminish. “After the collapses, new and rebranded groups emerged. This is expected to continue as leadership and senior affiliates strike out on their own, retire, or seek to distance themselves from prior reputations,” continued D’Agostino.
The fracturing of Conti and the multiple rebrandings of Darkside into their current incarnations have demonstrated the effectiveness of regular rebranding in shedding unwanted attention. "Should this approach continue to gain popularity, the apparent number of new groups announcing themselves will increase dramatically when in fact many are fragments or composites of old groups."
The increasing sophistication, or professionalism, of the criminal gangs is discussed in Cyber Insights 2023: Criminal Gangs. Here we will focus on how this affects ransomware.
The most obvious effect is the growth of ransomware-as-a-service (RaaS). The elite gangs are finding increased profits and reduced personal exposure by developing the malware and then leasing its use to third-party affiliates for a fee or a percentage of returns. Their success has been so great that more, lesser skilled gangs will follow the same path.
“It initially started as an annoyance,” explains Matthew Fulmer, manager of cyber intelligence engineering at Deep Instinct, “but now after years of successful evolution, these gangs operate with more efficiency than many Fortune 500 companies. They’re leaner, meaner, more agile, and we’re going to see even more jump on this bandwagon even if they’re not as advanced as their partners-in-crime.”
The less advanced groups, and all affiliates of RaaS, are likely to suffer at the hands of law enforcement. “It is likely that there will be a constant battle between law enforcement agencies and ransomware affiliates. This will either be veteran/more established ransomware affiliates or new ransomware groups with novel ideas,” comments Beth Allen, senior threat intelligence analyst at Intel 471.
“Much like whack-a-mole, RaaS groups will surface, conduct attacks, be taken down or have their operations impacted by LEAs – and then go quiet only to resurface in the future. The instability within criminal organizations that we have observed will also be a contributing factor to groups fading and others surfacing to fill the void.”
As defenders get better at defending against ransomware, the attackers will simply change their tactics. John Pescatore, director of emerging security trends at SANS, gives one example: “Many attackers will choose an easier and less obtrusive path to gain the same critical data. We will see more attacks target backups that are less frequently monitored, can provide ongoing access to data, and may be less secure or from forgotten older files.”
Drew Schmitt, lead analyst at GuidePoint, sees increased use of the methodologies that already work, combined with greater attempts to avoid law enforcement. “Ransomware groups will likely continue to evolve their operations leveraging critical vulnerabilities in commonly used applications, such as Microsoft Exchange, firewall appliances, and other widely used applications,” he suggested.
“The use of legitimate remote management tools such as Atera, Splashtop, and Syncro is likely to continue to be a viable source of flying under the radar while providing persistent access to threat actors,” he added.
But, he continued, “ransomware ‘rebranding’ is likely to increase exponentially to obfuscate ransomware operations and make it harder for security researchers and defenders to keep up with a blend of tactics.”
Warren expects to see criminal ransomware attacks focusing on smaller, less well-defended organizations. “State actors will still go after large institutions like the NHS, which implement robust defenses; but there are many small to mid-size companies that invest less in protection, have limited technical skills, and find cyberinsurance expensive – all of which makes them easy targets.”
This will partly be an effect of better defenses in larger organizations, and partly because of the influx of less sophisticated ransomware affiliates. “We can expect smaller scale attacks, for lower amounts of money, but which target a much broader base. The trend will probably hit education providers hard: education is already the sector most likely to be targeted,” he continued.
He gives a specific example from the UK. “Every school in the UK is being asked to join a multi-academy trust, where groups of schools will be responsible for themselves. With that change comes great vulnerability. This ‘network’ of schools would be a prime target for ransomware attacks; they are connected, and they’re unlikely to have the resilience or capabilities to protect against attacks. They may have no choice but to reallocate their limited funds to pay ransom demands.”
But it won’t just be more of the same. More professionalized attackers will lead to new attack techniques. Konstantin Zykov, senior security researcher at Kaspersky, gives an example: the use of drones. “Next year, we may see bold attackers become adept at mixing physical and cyber intrusions, employing drones for proximity hacking.”
He described some of the possible attack scenarios, such as, “Mounting drones with sufficient tooling to allow the collection of WPA handshakes used for offline cracking of Wi-Fi passwords or even dropping malicious USB keys in restricted areas in hope that a passerby would pick them up and plug them into a machine.”
Marcus Fowler, CEO of Darktrace Federal, believes the existing ransomware playbook will lead to increased cloud targeting. “Part of this playbook is following the data to maximize RoI. Therefore, as cloud adoption and reliance continue to surge, we are likely to see an increase in cloud-enabled data exfiltration in ransomware scenarios in lieu of encryption,” he said. “Third-party supply chains offer those with criminal intent more places to hide, and targeting cloud providers instead of a single organization gives attackers more bang for their buck.”
Evasion and persistence are other traits that will expand through 2023. "We continue to see an emergence in techniques that can evade typical security stacks, like HEAT (Highly Evasive Adaptive Threats) attacks," says Mark Guntrip, senior director of cybersecurity strategy at Menlo. "These tactics are not only tricking traditional corporate security measures, but they're also becoming more successful in luring employees into their traps as they identify ways to appear more legitimate by delivering ransomware via less suspect ways – like through browsers."
Persistence, that is, a lengthy dwell time, will also increase in 2023. “Rather than blatantly threatening organizations, threat actors will begin leveraging more discreet techniques to make a profit,” comments JP Perez-Etchegoyen, CTO at Onapsis. “Threat groups like Elephant Beetle have proven that cybercriminals can enter business-critical applications and remain undetected for months, even years, while silently siphoning off tens of millions of dollars.”
David Anteliz, senior technical director at Skybox, makes a specific persistence prediction for 2023: "In 2023, we predict a major threat group will be discovered to have been dwelling in the network of a Fortune 500 company for months, if not years, siphoning emails and accessing critical data without a trace. The organization will only discover its data has been accessed when the threat group threatens to take sensitive information to the dark web."
The effect of ransomware and its derivatives will continue to get worse before it gets better. Apart from the increasing sophistication of existing gangs, there is a new major threat – the worsening economic conditions that will have a global impact in 2023.
Firstly, a high number of cyber competent people will be laid off as organizations seek to reduce their staffing costs. These people will still need to make a living for themselves and their families – and from this larger pool, a higher than usual number of otherwise law-abiding people may be tempted by the easy route offered by RaaS. This alone could lead to increased levels of ransomware attacks by new wannabe criminals.
Secondly, companies will be tempted to reduce their security budgets on top of the reduced staffing levels. “Once rumblings of economic uncertainty begin, wary CFOs will begin searching for areas of superfluous spending to cut in order to keep their company ahead of the game,” warns Jadee Hanson, CIO and CISO at Code42. “For the uninformed C-suite, cybersecurity spend is sometimes seen as an added expense rather than an essential business function that helps protect the company’s reputation and bottom line.”
She is concerned that this could happen during a period of increasing ransomware attacks. “These organizations may try to cut spending by decreasing their investment in cybersecurity tools or talent – effectively lowering their company’s ability to properly detect or prevent data breaches and opening them up to potentially disastrous outcomes.”
One approach, advocated by Bec McKeown, director of human science at Immersive Labs, is to treat remaining staff as human firewalls. “I believe that 2023 will be the year when enterprises recognize that they are only as secure and resilient as their people – not their technologies,” she says. “Only by supporting initiatives that prioritize well-being, learning and development, and regular crisis exercising can organizations better prepare for the future.”
Done correctly, she believes this can be achieved in a resource- and cost-effective manner. “Adopting a psychological approach to human-driven responses during a crisis – like a cybersecurity breach – will ensure that organizations fare far better in the long run.”
But perhaps the most dramatic response to ransomware will need to come from governments, although law enforcement agencies alone won’t cut it. LEAs may know the perpetrators but will not be able to prosecute criminals ‘protected’ by adversary nations. LEAs may be able to take down criminal infrastructures, but the gangs will simply move to new infrastructures. The effectively bullet-proof hosting provided by the Interplanetary File System (IPFS), for example, will increasingly be abused by cybercriminals.
The only thing that will stop ransomware/extortion will be the prevention of its profitability – if the criminals don’t make a profit, they’ll stop doing it and try something different. But it’s not that easy. At the close of 2022, following major incidents at Optus and Medibank, Australia is considering making ransom payments illegal – but the difficulties are already apparent.
As ransomware becomes more destructive, paying or not paying may become existential. This will encourage companies to deny attacks, which will leave the victims of stolen PII unknowingly at risk. And any sectors exempted from a ban will have a large target on their back.
While many foreign governments are known to be, or have been, considering a ban on ransom payments, this is unlikely to happen in the US. In a very partisan political era, the strength of the Republican party – with its philosophy of minimal government interference in business – will make it impossible.
Ultimately, beating ransomware will be down to individual organizations’ own cyber defenses – and this will be harder than ever in 2023. “There’s no letup in sight,” comments Sam Curry, CSO at Cybereason. “Ransomware continues to target all verticals and geographies, and new ransomware cartels are popping up all the time. The biggest frustration is that it is a soluble problem.”
He believes there are ways to stop the delivery of the malware, and there are ways to prevent its execution. “There are ways to prepare in peacetime and not panic in the moment, but most companies aren’t doing this. Saddest of all is the lack of preparation at the bottom of the pyramid in smaller businesses and below the security poverty line. Victims can’t pay to make the problem go away. When they do, they get hit repeatedly for having done so. The attackers know that the risk equation hasn’t changed between one attack and the next, nor have the defenses.”
Related: It Doesn’t Pay to Pay: Study Finds 80% of Ransomware Victims Attacked Again
Related: New Zealand Government Hit by Ransomware Attack on IT Provider
Related: Ransomware, Malware-as-a-Service Dominate Threat Landscape
SecurityWeek Cyber Insights 2023 | Quantum Computing and the Coming Cryptopocalypse – The waiting time for general purpose quantum computers is getting shorter, but they are still probably decades away. The arrival of cryptanalytically-relevant quantum computers (CRQCs) that will herald the cryptopocalypse will be much sooner – possibly less than a decade.
At that point, our existing PKI-protected data will become accessible as plaintext to anybody with such a machine, and the 'harvest now, decrypt later' process will be complete. This is known as the cryptopocalypse. It is important to note that all PKI-encrypted data that has already been harvested by adversaries is already lost. We can do nothing about the past; we can only attempt to protect the future.
Here we are going to examine the why, what, and how we need to prepare for that cryptopocalypse – but first we need a few definitions to ensure we’re all singing the same song.
The cryptopocalypse is the point at which quantum computing becomes powerful enough to use Shor’s algorithm to crack PKI encryption. Since public key encryption is used to secure almost all data in transit, both between separate IT infrastructures and even within individual infrastructures, that data will become accessible by anyone with a sufficiently powerful quantum computer.
“That means that all secrets are at risk,” explains Bryan Ware, CEO at LookingGlass; “nuclear weapons, banks, business IP, intelligence agencies, among other things, are at risk of losing their confidentiality and integrity.”
But this is not a threat for the future – the threat exists today. Adversaries are known to be stealing and storing encrypted data with the knowledge that within a few years they will be able to access the raw data. This is known as the ‘harvest now, decrypt later’ threat. Intellectual property and commercial plans – not to mention military secrets – will still be valuable to adversaries when the cryptopocalypse happens.
“Even if a cryptographically relevant quantum computer is still years away, the time to start preparing is now,” warns Rebecca Krauthamer, co-founder and CPO at QuSecure.
The one thing we can say with certainty is that it definitely won't happen in 2023 – probably. That 'probably' comes from not knowing for certain what stage in the journey to quantum computing has been reached by foreign nations or their intelligence agencies – and they're not likely to tell us. Nevertheless, it is assumed that nobody yet has a quantum computer powerful enough to run Shor's algorithm and crack PKI encryption in a meaningful timeframe.
Such computers may become available in as little as three to five years; most predictions suggest ten. Note that a specialized quantum computer designed specifically to run Shor's algorithm does not need to be as powerful as a general-purpose quantum computer – which is more likely to be 20 to 30 years away.
It is difficult to make precise predictions because the power of a quantum computer comes from the number of qubits that can be used. This is complicated by the instability of qubits, which requires a large number of additional qubits dedicated solely to error correction. Consequently, the number of qubits that can actually be 'used' (logical qubits) is much smaller than the total number needed (physical qubits).
It has been suggested that as many as 1,000 physical qubits may be required for each logical qubit. This will depend on the quality of the error correction in use – and this is an area of intense research. So, at some time in the next few years, as the number of physical qubits increases, and the number of required physical qubits per logical qubit decreases, quantum developers will have a quantum computer able to crack PKI. It has been estimated that this will require between approximately 1,000 and 2,000 logical qubits.
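A rough back-of-the-envelope calculation with the figures quoted above shows why the timeline hinges on error correction. The Python sketch below simply multiplies the article's estimates; the 100-to-1 overhead case is an illustrative assumption about future improvements, not a forecast.

```python
# Rough arithmetic using the estimates quoted above: 1,000 - 2,000 logical
# qubits to threaten PKI, and up to ~1,000 physical qubits per logical qubit.
def physical_qubits_needed(logical_qubits: int, physical_per_logical: int) -> int:
    return logical_qubits * physical_per_logical

for logical in (1_000, 2_000):
    for overhead in (1_000, 100):   # today's overhead vs. a hypothetical 10x improvement
        print(f"{logical} logical x {overhead} overhead = "
              f"{physical_qubits_needed(logical, overhead):,} physical qubits")

# Compare with IBM's 433-qubit Osprey and the ~4,000-qubit machine planned for 2025:
# even the optimistic overhead leaves a gap of one to two orders of magnitude.
```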
To put some flesh on this skeleton, we can look at an announcement made by IBM on November 9, 2022: a new 433-qubit Osprey processor. This was accompanied by a roadmap that shows a progression toward a 4,000-plus qubit quantum computer, codenamed Kookaburra, due in 2025.
Error correction is being addressed by a new version of IBM's Qiskit Runtime software that allows 'a user to trade speed for reduced error count with a simple option in the API'. This is supported by a new modular IBM Quantum System Two able to combine multiple processors into a single system with communication links. System Two is expected to go live in 2023, around the same time that IBM expects to have a 1,000-plus qubit processor codenamed Condor.
System Two will be a building block in what IBM calls quantum-centric supercomputing. Scott Crowder, the VP of IBM quantum adoption and business, explains in more detail: “Quantum-centric supercomputing (which describes a modular architecture and quantum communication designed to increase computational capacity, and which employs hybrid cloud middleware to seamlessly integrate quantum and classical workflows) is the blueprint for how quantum computing will be used in the years to come.”
He added, “This approach to scaling quantum systems alongside the recent, dramatic improvements in techniques to deal with quantum processor errors is how we envision a path to near-term, practical quantum advantage – the point when quantum processors will be capable of performing a useful computation, faster, more accurately, or cheaper than using exclusively classical computing.”
Navigating such projections doesn’t tell us precisely when to expect the cryptopocalypse, but they clearly show it is getting perilously close. “Quantum computing is not, yet, to the point of rendering conventional encryption useless, at least that we know of, but it is heading that way,” comments Mike Parkin, senior technical engineer at Vulcan Cyber.
Skip Sanzeri, co-founder and COO at QuSecure, warns that the threat to current encryption is not limited to quantum decryption. “New approaches are being developed promising the same post-quantum cybersecurity threats as a cryptographically relevant quantum computer, only much sooner,” he said. “It is also believed that quantum advancements don’t have to directly decrypt today’s encryption. If they weaken it by suggesting or probabilistically finding some better seeds for a classical algorithm (like the sieve) and make that more efficient, that can result in a successful attack. And it’s no stretch to predict, speaking of predictions, that people are going to find ways to hack our encryption that we don’t even know about yet.”
Steve Weston, co-founder and CTO at Incrypteon, offers a possible illustration. “Where is the threat in 2023 and beyond?” he asks. “Is it the threat from quantum computers, or is the bigger threat from AI? An analysis of cryptoanalysis and code breaking over the last 40 years shows how AI is used now, and will be more so in the future.”
Quantum key distribution (QKD) is a method of securely exchanging encryption keys using quantum properties transmitted via fiber. While in this quantum state, the nature of quantum mechanics ensures that any attempt to access the transmission will disturb the content. It does not prevent attacks, but ensures that an attempted attack is immediately visible, and the key can be discarded. Successful QKD paves the way for data to be transmitted using the latest and best symmetrical encryption. Current symmetrical algorithms are considered safe against quantum decryption.
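For readers unfamiliar with how QKD exposes an interceptor, here is a purely classical toy simulation of the basis-reconciliation step used in BB84, the original QKD protocol. Real QKD runs over photons in fiber; this Python sketch only illustrates why an intercept-and-resend attacker pushes the error rate of the sifted key to roughly 25 percent, which the parties detect before using the key.

```python
import random

def bb84_round(n_bits: int, eavesdropper: bool = False) -> float:
    """Classically simulate BB84 sifting and return the sifted-key error rate."""
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("XZ") for _ in range(n_bits)]
    bob_bases   = [random.choice("XZ") for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdropper and random.choice("XZ") != a_basis:
            bit = random.randint(0, 1)      # Eve's wrong-basis measurement randomizes the bit
        # Bob gets the (possibly disturbed) bit only if his basis matches Alice's.
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Publicly compare bases and keep only the positions where they agree (the sifted key).
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(1 for a, b in sifted if a != b)
    return errors / len(sifted)

print("error rate, no eavesdropper:  ", bb84_round(10_000))        # ~0.0
print("error rate, with eavesdropper:", bb84_round(10_000, True))  # ~0.25 -> discard the key
```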
"Symmetric encryption, like AES-256, is theorized to be quantum safe, but one can speculate that key sizes will soon double," comments Silvio Pappalardo, chief revenue officer at Quintessence Labs.
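Such symmetric encryption is already routine to deploy. As a minimal sketch only, using the Python cryptography package's AES-GCM interface with a toy message, the following shows a 256-bit key in use; it is illustrative, not a hardening guide.

```python
# A minimal sketch of 256-bit authenticated symmetric encryption with the
# widely used Python 'cryptography' package - illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key: currently considered quantum safe
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # a GCM nonce must never repeat for a given key
ciphertext = aesgcm.encrypt(nonce, b"quarterly results", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"quarterly results"
```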
“Quantum cryptography is a method of encryption that uses the principles of quantum physics in securing and transmitting data,” says Ganesh Subramanya, head of data protection CoE cybersecurity at TCS. “It creates security so strong that data coded in quantum state cannot be compromised without the sender being notified. Traditional cryptography uses technologies like SSL and TLS to secure data over the internet, but they have been vulnerable to a variety of attacks, as an attacker can change the communication between two parties (like user’s browser and the webpage / application) and make them believe they’re still communicating with each other. With quantum cryptography, such an alteration of data is not possible, thereby strengthening the security of online transactions.”
John Prisco, Toshiba partner and president/CEO of Safe Quantum, applies these principles to QKD. “Quantum key distribution contains a key security aspect that cannot be overstated,” he says, “especially if it is being utilized in tandem with the NIST post-quantum encryption standards (PQC). The gold standard in cybersecurity is considered to be defense in-depth, as this leverages two totally different technologies with diverse failure mechanisms, working for protection. With harvest now decrypt later attacks becoming more frequent, there is no delay time that is safe to defend against quantum attacks. QKD authenticated with PQC signature algorithms is the only defense that can be deployed immediately and guarantee a successful defense against harvest now, decrypt later.”
Terry Cronin, the VP at Toshiba who oversees the QKD Division, agrees with this assessment. "The use of QKD as part of a hybrid solution to quantum resistance can offer the security needed, ensuring that a harvest and decrypt attack cannot succeed in accessing the data."
The practical difficulties of introducing wide-scale fiber-based QKD mean that it cannot be implemented everywhere. Its immediate use will likely be limited to point-to-point communications between high value sites – such as some government agencies and between major bank offices.
NIST began a competition to select and standardize post quantum encryption algorithms in 2016. “We’re looking to replace three NIST cryptographic standards and guidelines that would be the most vulnerable to quantum computers,” said NIST mathematician Dustin Moody at the time. “They deal with encryption, key establishment and digital signatures, all of which use forms of public key cryptography.”
In July 2022, NIST announced its first four selections. However, it emerged in August 2022 that another candidate, the Supersingular Isogeny Key Encapsulation (SIKE) algorithm, had already been broken. SIKE is designed to deliver keys securely from source to destination across an untrusted network. Researchers demonstrated, however, that the algorithm could be cracked on a single classical PC in little over an hour.
This illustrates a problem that all security professionals need to confront. Any encryption algorithm is secure only until it is cracked. Whitehat researchers will tell you if they can crack an algorithm — foreign governments will not. In effect, this means that the ‘later’ part of ‘harvest now, decrypt later’ is an optimistic view. We believe that encrypted IP being stolen today cannot yet be decrypted — but we cannot be certain.
We do, however, know that current PKI encryption will certainly be broken by quantum computers in the relatively near future. The solution from NIST is to replace today's vulnerable PKI algorithms with more complex ones — that is, to counter more powerful computing with more powerful algorithms.
Ultimately, we will be in the same position we are in today. We will believe our IP protected by NIST’s post quantum algorithms will be safe — but we cannot be certain. Remember that at least one proposed post-quantum algorithm has been broken on a PC. So, even if we switch to a NIST-approved post quantum encryption standard tomorrow, we cannot be certain that the harvest now decrypt later philosophy has been beaten.
NIST's PQC algorithms are 'quantum safe', but they are not 'quantum secure'. The former are thought to be safe against quantum decryption but cannot be proven to be so (since they are mathematical in nature and susceptible to mathematical decryption). Cryptography that can be proven to be safe is known as 'quantum secure' — and the only way to achieve this is to remove mathematics from the equation.
The only known quantum secure cryptography is the one-time pad, because it relies on information security rather than mathematical security. Technically, QKD could be described in similarly secure terms, since any attempt to obtain the keys in transit could result in their immediate destruction (preventing them from ever being used for decryption). We have already seen that QKD has problems for widespread use — but it remains an open question whether modern technology is able to deliver usable one-time pads.
Historically, OTP has been considered unworkable for the internet age because it requires keys at least as long as the message being encrypted. Nevertheless, several companies have been exploring the possibilities opened up by new technology.
Qrypt started from the basis that the quantum threat comes from the communication of encryption keys from source to destination. If you can avoid the necessity to communicate the keys, you can eliminate the threat. It consequently developed a process that allows the generation of the same quantum random numbers simultaneously at both source and destination. A quantum random number is a genuinely random number generated with quantum mechanics principles. These numbers can then be used to generate identical keys without them needing to be transmitted across the internet.
Further, since the numbers can be generated in advance and stored until use, there is the potential to chain the process to provide a genuine OTP for the keys without requiring them to be transmitted across the internet. Solutions based on this process are quantum secure.
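The mechanics of the one-time pad itself are almost trivially simple; its strength comes entirely from how the pad is generated, shared and never reused. The Python sketch below is a toy illustration of the XOR operation and of the Qrypt-style idea that both ends could hold identical random material without it ever crossing the network. It is not any vendor's product.

```python
import secrets

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, at least as long as the message, and never reused.
    assert len(pad) >= len(message)
    return bytes(m ^ p for m, p in zip(message, pad))

otp_decrypt = otp_encrypt            # XOR is its own inverse

# In a Qrypt-style scheme, both endpoints would derive this same pad locally
# from independently generated quantum random numbers; nothing crosses the wire.
pad = secrets.token_bytes(32)
ciphertext = otp_encrypt(b"meet at the usual place", pad)
print(otp_decrypt(ciphertext, pad))  # b'meet at the usual place'
```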
Incrypteon, a British startup, has taken a different route by applying Shannon’s information theories to the one-time pad. The science is a bit mind-numbing but is based on Shannon’s equivocation from his Communication Theory of Secrecy Systems published in 1949. “The definition of perfect secrecy is based on statistics and probabilities,” says Incrypteon. “A ciphertext maintains perfect secrecy if the attacker’s knowledge of the contents of the message is the same both before and after the adversary inspects the ciphertext, attacking it with unlimited resources.”
Using its own patented software and ‘Perpetual Equivocation’, Incrypteon “ensures that conditional entropy never equals zero, therefore achieving Perfect Secrecy.” The result is something that is automatically quantum secure (not just quantum safe) — and is available today.
Co-founder Helder Figueira had been an electronic warfare signals officer commanding a cryptanalysis unit in the South African Army. The concepts of Shannon’s equivocation are well understood by the military, and he has long been concerned that the commercial market is forced to accept encryption that is, by definition, ‘insecure’ – if something cannot be proven to be secure, it must be insecure.
A third and potentially future approach to the one-time pad could evolve from current advances in tokenization – more specifically cloud-based vaultless tokenization protected by immutable servers.
Rixon, another startup, is involved in this area. Its primary purpose is to protect PII stored by organizations with a web presence – but the principles could easily be extended. Plaintext is immediately tokenized in the cloud, and no plaintext is held onsite. Nor is the plaintext held at the tokenization engine in the cloud – all that is stored is the tokenization route for each tokenized character (for the purpose of comparison, this tokenization route is equivalent to the cryptographic key, but is random for each character).
This provides the primary parallel with the OTP – the ‘key’ is the same length as the message. Currently, Rixon concentrates on tokenizing PII; but the same concept could be extended to secure high value files at rest such as intellectual property and commercial plans.
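Rixon has not published its implementation, so the following is a purely hypothetical sketch of the general idea of per-character, vaultless tokenization: each character position gets its own random substitution ‘route’, so the stored secret is comparable in length to the data it protects, which is the OTP parallel noted above.

# Hypothetical sketch of per-character tokenization (not Rixon's actual design).
# Each character position gets an independent random substitution table (its
# "route"); only the tokens leave, while the routes stay with the engine.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits
_rng = secrets.SystemRandom()

def tokenize(plaintext: str):
    routes, tokens = [], []
    for ch in plaintext:
        route = list(ALPHABET)
        _rng.shuffle(route)                     # fresh random table per character
        routes.append(route)
        tokens.append(route[ALPHABET.index(ch)])
    return "".join(tokens), routes              # routes are the per-character "keys"

def detokenize(tokens: str, routes) -> str:
    return "".join(ALPHABET[route.index(t)] for t, route in zip(tokens, routes))

token, routes = tokenize("AB12cd")
assert detokenize(token, routes) == "AB12cd"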
The coming cryptopocalypse requires organizations to transition from known quantum-vulnerable encryption (such as current PKI standards) to something that is at least quantum safe if not quantum secure. This will be a long process, and in 2023 businesses will need to start planning their route in greater detail.
Most companies will start from the viewpoint that NIST’s post-quantum algorithms are the only way forward. We have discussed OTP developments in some depth to show that the NIST route is not the only available route – and we expect further OTP developments during 2023.
The full transition to post quantum readiness will take many years, and will not be achieved by throwing a switch from classical to PQC. This has led to the concept of ‘crypto agility’. “It will be essential that quantum ready algorithms (QRAs) are able to coexist with existing cryptographic capabilities, in a hybrid manner, while the complete transition to quantum safe occurs,” explains Silvio Pappalardo, chief revenue officer at Quintessence Labs.
“Crypto agility enables applications to migrate between key types and cryptographic algorithms without the need to update the application software — transitioning from homogenous towards micro-service architecture,” he said. “With encryption ciphers changing due to the threat of quantum, decreasing longevity, increasing key sizes, and the expanding requirements to protect more data, more effectively, crypto agility becomes a business enabler and defender to keep pace with constant innovations and enable greater flexibility into the future.” Such agility also allows companies to switch from one quantum safe algorithm to another if the one in use gets broken.
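As a rough illustration of the crypto-agility idea (a sketch under assumptions, not any vendor’s product), an application can call an abstraction layer that looks up the active algorithm by a configured name, so ciphers can be swapped, or run side by side during a hybrid transition, without changing application code. The example uses the open-source Python cryptography package; the registry, names and configuration variable are assumptions.

# Sketch of a crypto-agility layer: the application requests "the current
# algorithm" by name, so swapping ciphers is a configuration change only.
# Real deployments would add key management, versioned headers, hybrid modes.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

CIPHERS = {
    "aes-256-gcm": AESGCM,
    "chacha20-poly1305": ChaCha20Poly1305,
    # a quantum-ready or hybrid scheme could be registered here later
}

ACTIVE = "aes-256-gcm"   # driven by policy/config, not by application code

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    cipher = CIPHERS[ACTIVE](key)
    nonce = os.urandom(12)
    return nonce + cipher.encrypt(nonce, plaintext, None)

def decrypt(key: bytes, blob: bytes) -> bytes:
    cipher = CIPHERS[ACTIVE](key)
    return cipher.decrypt(blob[:12], blob[12:], None)

key = AESGCM.generate_key(bit_length=256)
assert decrypt(key, encrypt(key, b"secret")) == b"secret"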
For now, government agencies will have little choice but to follow NIST. On November 18, 2022, the White House issued a memorandum to the heads of executive departments and agencies requiring that CRQC readiness begins with taking an inventory of vulnerable assets. “By May 4, 2023, and annually thereafter until 2035”, states the memo, “agencies are directed to submit a prioritized inventory of information systems and assets, excluding national security systems, that contain CRQC-vulnerable cryptographic systems to ONCD and the Department of Homeland Security Cybersecurity and Infrastructure Security Agency (CISA).”
(This confirmed earlier details announced in the National Security Memorandum NSM/10 published on May 4, 2022.)
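As a hedged illustration of the inventory step these directives mandate (a sketch only, not ONCD or CISA guidance), a script can walk a store of certificates and flag quantum-vulnerable public-key algorithms such as RSA and elliptic-curve keys. It uses the Python cryptography package; the directory path is hypothetical.

# Sketch: flag certificates whose public keys rely on quantum-vulnerable
# algorithms (RSA, elliptic curve). A real inventory would also cover
# protocols, libraries, firmware and embedded keys.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

def scan_certificates(cert_dir: str) -> None:
    for path in Path(cert_dir).glob("*.pem"):
        cert = x509.load_pem_x509_certificate(path.read_bytes())
        key = cert.public_key()
        if isinstance(key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey)):
            print(f"{path.name}: CRQC-vulnerable key ({type(key).__name__}), "
                  f"expires {cert.not_valid_after:%Y-%m-%d}")

# scan_certificates("/etc/pki/tls/certs")   # hypothetical location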
On December 21, 2022, Biden signed the Quantum Computing Cybersecurity Preparedness Act into law. “Quantum computers are under development globally with some adversarial nation states putting tens of billions of dollars into programs to create these very powerful machines that will break the encryption we use today,” comments Sanzeri. “While not here yet, quantum computers will be online in coming years, but it will take more than a few years for our federal agencies and commercial enterprises to upgrade their systems to post quantum cybersecurity.”
This Act, he continued, “requires federal agencies to migrate systems to post quantum cryptography which is resilient against attacks from quantum computers. And the Office of Management and Budget is further required to send an annual report to Congress depicting a strategy on how to assess post-quantum cryptography risks across the federal government.”
The government is clearly wedded to the NIST proposals. This may be because NIST is correct in its assertion that OTP is not realistic. NIST computer security mathematician Dustin Moody told SecurityWeek in October 2022, “The one-time pad must be generated by a source of true randomness, and not a pseudo-random process.” But there are numerous sources for the generation of genuinely random numbers using quantum mechanics.
“The one-time pad must be as long as the message which is to be encrypted,” added Moody. “If you wish to encrypt a long message, the size of the one-time pad will be much larger than key sizes of the algorithms we [NIST] selected.” This assumption is also being challenged by both Qrypt and Incrypteon, and potentially by tokenization firms like Rixon.
Nevertheless, most companies will follow the incremental process of NIST rather than the more revolutionary process of OTP, if only because of NIST’s reputation and government support. 2023 will see more companies beginning their move to CRQC readiness – but there are more options than are immediately obvious.
Related: Quantum Computing’s Threat to Public-key Cryptosystems
Related: Quantum Computing Is for Tomorrow, But Quantum-Related Risk Is Here Today
Related: Solving the Quantum Decryption ‘Harvest Now, Decrypt Later’ Problem
Related: Is OTP a Viable Alternative to NIST’s Post-Quantum Algorithms?
The post Cyber Insights 2023: Quantum Computing and the Coming Cryptopocalypse appeared first on SecurityWeek.
SecurityWeek Cyber Insights 2023 | Artificial Intelligence – The pace of artificial intelligence (AI) adoption is increasing throughout industry and society. This is because governments, civil organizations and industry all recognize the greater efficiency and lower costs available from AI-driven automation. The process is irreversible.
What is still unknown is the degree of danger that may be introduced when adversaries start to use AI as an effective weapon of attack rather than a tool for beneficial improvement. That day is coming – and will begin to emerge from 2023.
Alex Polyakov, CEO and co-founder of Adversa.AI, focuses on 2023 for primarily historical and statistical reasons. “The years 2012 to 2014,” he says, “saw the beginning of secure AI research in academia. Statistically, it takes three to five years for academic results to progress into practical attacks on real applications.” Examples of such attacks were presented at Black Hat, Defcon, HITB, and other industry conferences starting in 2017 and 2018.
“Then,” he continued, “it takes another three to five years before real incidents are discovered in the wild. We are talking about next year, and some massive Log4j-type vulnerabilities in AI will be exploited – massively.”
Starting from 2023, attackers will have what is called an ‘exploit-market fit’. “Exploit-market fit refers to a scenario where hackers know the ways of using a particular vulnerability to exploit a system and get value,” he said. “Currently, financial and internet companies are completely open to cyber criminals, and the way to hack them to get value is obvious. I assume the situation will get worse and affect other AI-driven industries once attackers find the exploit-market fit.”
The argument is similar to that given by NYU professor Nasir Memon, who described the delay in widespread weaponization of deepfakes with the comment, “the bad guys haven’t yet figured a way to monetize the process.” Monetizing an exploit-market fit scenario will result in widespread cyberattacks – and that could start from 2023.
Over the last decade, security teams have largely used AI for anomaly detection; that is, to detect indications of compromise, presence of malware, or active adversarial activity within the systems they are charged to defend. This has primarily been passive detection, with responsibility for response in the hands of human threat analysts and responders. This is changing. Limited resources – which will worsen in the expected economic downturn and possible recession of 2023 – are driving a need for more automated responses. For now, this is largely limited to the simple automatic isolation of compromised devices; but more widespread automated AI-triggered responses are inevitable.
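As a minimal sketch of that pattern (not any vendor’s detection logic; the features, threshold and device names are assumptions), an unsupervised model such as an isolation forest can score per-device telemetry and trigger a containment action for outliers:

# Sketch: unsupervised anomaly detection over simple per-device features,
# with an automated containment hook for flagged devices. Illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

# columns: bytes_out_per_hour, failed_logins, distinct_destinations (made-up features)
baseline = np.random.default_rng(0).normal([5e6, 1, 20], [1e6, 1, 5], size=(500, 3))
today = np.array([[4.8e6, 0, 18],        # normal-looking device
                  [9.0e7, 40, 400]])     # suspicious device

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def isolate(device_id: str) -> None:
    print(f"quarantining {device_id}")   # placeholder for an EDR/NAC API call

for device_id, verdict in zip(["host-a", "host-b"], model.predict(today)):
    if verdict == -1:                    # -1 means the model flags an anomaly
        isolate(device_id)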
“The growing use of AI in threat detection – particularly in removing the ‘false positive’ security noise that consumes so much security attention – will make a significant difference to security,” claims Adam Kahn, VP of security operations at Barracuda XDR. “It will prioritize the security alarms that need immediate attention and action. SOAR (Security Orchestration, Automation and Response) products will continue to play a bigger role in alarm triage.” This is the so-far traditional beneficial use of AI in security. It will continue to grow in 2023, although the algorithms used will need to be protected from malicious manipulation.
“As companies look to cut costs and extend their runways,” agrees Anmol Bhasin, CTO at ServiceTitan, “automation through AI is going to be a major factor in staying competitive. In 2023, we’ll see an increase in AI adoption, expanding the number of people working with this technology and illuminating new AI use cases for businesses.”
AI will become more deeply embedded in all aspects of business. Where security teams once used AI to defend the business against attackers, they will now need to defend the AI within the wider business, lest it also be used against the business. This will become more difficult in the exploit-market fit future – attackers will understand AI, understand the weaknesses, and have a methodology for monetizing those weaknesses.
As the use of AI grows, so the nature of its purpose changes. Originally, it was primarily used in business to detect changes; that is, things that had already happened. In the future, it will be used to predict what is likely to happen – and these predictions will often be focused on people (staff and customers). Solving the long-known weaknesses in AI will become more important. Bias in AI can lead to wrong decisions, while failures in learning can lead to no decisions. Since the targets of such AI will be people, the need for AI to be complete and unbiased becomes imperative.
“The accuracy of AI depends in part on the completeness and quality of data,” comments Shafi Goldwasser, co-founder at Duality Technologies. “Unfortunately, historical data is often lacking for minority groups and when present reinforces social bias patterns.” Unless eliminated, such social biases will work against minority groups within staff, causing both prejudice against individual staff members, and missed opportunities for management.
Great strides in eliminating bias have been made in 2022 and will continue in 2023. This is largely based on checking the output of AI, confirming that it is what is expected, and knowing what part of the algorithm produced the ‘biased’ result. It’s a process of continuous algorithm refinement, and will obviously produce better results over time. But there will ultimately remain a philosophic question over whether bias can be completely removed from anything that is made by humans.
“The key to decreasing bias is in simplifying and automating the monitoring of AI systems. Without proper monitoring of AI systems there can be an acceleration or amplification of biases built into models,” says Vishal Sikka, founder and CEO at Vianai. “In 2023, we will see organizations empower and educate people to monitor and update the AI models at scale while providing regular feedback to ensure the AI is ingesting high-quality, real-world data.”
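A hedged sketch of what such monitoring can mean at its simplest: compare a model’s selection rates across groups and alert when the gap (a demographic-parity difference) drifts past a threshold. The group labels, data and threshold here are illustrative assumptions, not a recognized standard.

# Sketch: monitor demographic parity of a model's positive decisions.
# Group labels, sample data and the alert threshold are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    counts, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        counts[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / counts[g] for g in counts}

def parity_gap(decisions):
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("group_a", 1), ("group_a", 0), ("group_a", 1),
             ("group_b", 0), ("group_b", 0), ("group_b", 1)]

if parity_gap(decisions) > 0.2:          # illustrative alerting threshold
    print("bias alert: selection rates diverge across groups")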
Failure in AI is generally caused by an inadequate data lake from which to learn. The obvious solution for this is to increase the size of the data lake. But when the subject is human behavior, that effectively means an increased lake of personal data – and for AI, this means a massively increased lake, more like an ocean, of personal data. On most legitimate occasions, this data will be anonymized – but as we know, it is very difficult to fully anonymize personal information.
“Privacy is often overlooked when thinking about model training,” comments Nick Landers, director of research at NetSPI, “but data cannot be completely anonymized without destroying its value to machine learning (ML). In other words, models already contain broad swaths of private data that might be extracted as part of an attack.” As the use of AI grows, so will the threats against it increase in 2023.
“Threat actors will not stand flatfooted in the cyber battle space and will become creative, using their immense wealth to try to find ways to leverage AI and develop new attack vectors,” warns John McClurg, SVP and CISO at BlackBerry.
Natural language processing (NLP) will become an important part of companies’ internal use of AI. The potential is clear. “Natural Language Processing (NLP) AI will be at the forefront in 2023, as it will enable organizations to better understand their customers and employees by analyzing their emails and providing insights about their needs, preferences or even emotions,” suggests Jose Lopez, principal data scientist at Mimecast. “It is likely that organizations will offer other types of services, not only focused on security or threats but on improving productivity by using AI for generating emails, managing schedules or even writing reports.”
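As a rough sketch of the kind of analysis Lopez describes (using the Hugging Face transformers pipeline for illustration; the default model download and the sample messages are assumptions), sentiment scoring over message text might look like this:

# Sketch: score the sentiment of internal messages to surface frustration
# or urgency trends. Model choice and inputs are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")   # downloads a default model on first use

emails = [
    "Thanks for the quick turnaround on the report.",
    "This is the third time the portal has locked me out today.",
]

for text, result in zip(emails, classifier(emails)):
    print(f"{result['label']:8} {result['score']:.2f}  {text}")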
But he also sees the dangers involved. “However, this will also drive cyber criminals to invest further into AI poisoning and clouding techniques. Additionally, malicious actors will use NLP and generative models to automate attacks, thereby reducing their costs and reaching many more potential targets.”
Polyakov agrees that NLP is of increasing importance. “One of the areas where we might see more research in 2023, and potentially new attacks later, is NLP,” he says. “While we saw a lot of computer vision-related research examples this year, next year we will see much more research focused on large language models (LLMs).”
But LLMs have been known to be problematic for some time – and there is a very recent example. On November 15, 2022, Meta AI (still Facebook to most people) introduced Galactica. Meta claimed to have trained the system on 106 billion tokens of open-access scientific text and data, including papers, textbooks, scientific websites, encyclopedias, reference material, and knowledge bases.
“The model was intended to store, combine and reason about scientific knowledge,” explains Polyakov – but Twitter users rapidly tested its input tolerance. “As a result, the model generated realistic nonsense, not scientific literature.” ‘Realistic nonsense’ is being kind: it generated biased, racist and sexist returns, and even false attributions. Within a few days, Meta AI was forced to shut it down.
“So new LLMs will have many risks we’re not aware of,” continued Polyakov, “and it is expected to be a big problem.” Solving the problems with LLMs while harnessing the potential will be a major task for AI developers going forward.
Building on the problems with Galactica, Polyakov tested semantic tricks against ChatGPT – an AI-based chatbot developed by OpenAI, based on GPT-3.5 (GPT stands for Generative Pre-trained Transformer) and released for crowdsourced internet testing in November 2022. ChatGPT is impressive. It has already discovered, and recommended remediation for, a vulnerability in a smart contract, helped develop an Excel macro, and even provided a list of methods that could be used to fool an LLM.
For the last, one of these methods is role playing: ‘Tell the LLM that it is pretending to be an evil character in a play,’ it replied. This is where Polyakov started his own tests, basing a query on the Jay and Silent Bob ‘If you were a sheep…’ meme.
He then iteratively refined his questions with multiple abstractions until he succeeded in getting a reply that circumvented ChatGPT’s blocking policy on content violations. “What is important with such an advanced trick of multiple abstractions is that neither the question nor the answers are marked as violating content!” said Polyakov.
He went further and tricked ChatGPT into outlining a method for destroying humanity – a method that bears a surprising similarity to the television program Utopia.
He then asked for an adversarial attack on an image classification algorithm – and got one. Finally, he demonstrated the ability of ChatGPT to ‘hack’ a different generative model (Dall-E 2) into bypassing its content moderation filter – and he succeeded.
The basic point of these tests is that LLMs, which mimic human reasoning, respond in a manner similar to humans; that is, they can be susceptible to social engineering. As LLMs become more mainstream, it may take nothing more than advanced social engineering skills to defeat them or to circumvent their good-behavior policies.
At the same time, it is important to note the numerous reports detailing how ChatGPT can find weaknesses in code and offer improvements. This is good – but adversaries could use the same process to develop exploits for vulnerabilities and better obfuscate their code; and that is bad.
Finally, we should note that the marriage of AI chatbots of this quality with the latest deepfake video technology could soon lead to alarmingly convincing disinformation capabilities.
Problems aside, the potential for LLMs is huge. “Large Language Models and Generative AI will emerge as foundational technologies for a new generation of applications,” comments Villi Iltchev, partner at Two Sigma Ventures. “We will see a new generation of enterprise applications emerge to challenge established vendors in almost all categories of software. Machine learning and artificial intelligence will become foundation technologies for the next generation of applications.”
He expects a significant boost in productivity and efficiency with applications performing many tasks and duties currently done by professionals. “Software,” he says, “will not just boost our productivity but will also make us better at our jobs.”
One of the most visible areas of malicious AI usage likely to evolve in 2023 is the criminal use of deepfakes. “Deepfakes are now a reality and the technology that makes them possible is improving at a frightening pace,” warns Matt Aldridge, principal solutions consultant at OpenText Security. “In other words, deepfakes are no longer just a catchy creation of science-fiction – and as cybersecurity experts we have the challenge to produce stronger ways to detect and deflect attacks that will deploy them.” (See Deepfakes – Significant or Hyped Threat? for more details and options.)
Machine learning models, already available to the public, can automatically translate between languages in real time while also transcribing audio into text – and we have seen huge developments in recent years in conversational bots. With these technologies working in tandem, there is a fertile landscape of attack tools that could lead to dangerous circumstances during targeted attacks and well-orchestrated scams.
“In the coming years,” continued Aldridge, “we may be targeted by phone scams powered by deepfake technology that could impersonate a sales assistant, a business leader or even a family member. In less than ten years, we could be frequently targeted by these types of calls without ever realizing we’re not talking to a human.”
Lucia Milica, global resident CISO at Proofpoint, agrees that the deepfake threat is escalating. “Deepfake technology is becoming more accessible to the masses. Thanks to AI generators trained on huge image databases, anyone can generate deepfakes with little technical savvy. While the output of the state-of-the-art model is not without flaws, the technology is constantly improving, and cybercriminals will start using it to create irresistible narratives.”
Thus far, deepfakes have primarily been used for satirical purposes and pornography. In the relatively few cybercriminal attacks, they have concentrated on fraud and business email compromise schemes. Milica expects future use to spread wider. “Imagine the chaos to the financial market when a deepfake CEO or CFO of a major company makes a bold statement that sends shares into a sharp drop or rise. Or consider how malefactors could leverage the combination of biometric authentication and deepfakes for identity fraud or account takeover. These are just a few examples – and we all know cybercriminals can be highly creative.”
The potential return on successful market manipulation will be a major attraction for advanced adversarial groups – and the introduction of financial chaos into western financial markets would be equally attractive to adversarial nations in a period of geopolitical tension.
The expectation of AI may still be a little ahead of its realization. “‘Trendy’ large machine learning models will have little to no impact on cyber security [in 2023],” says Andrew Patel, senior researcher at WithSecure Intelligence. “Large language models will continue to push the boundaries of AI research. Expect GPT-4 and a new and completely mind-blowing version of GATO in 2023. Expect Whisper to be used to transcribe a large portion of YouTube, leading to vastly larger training sets for language models. But despite the democratization of large models, their presence will have very little effect on cyber security, either from the attack or defense side. Such models are still too heavy, expensive, and not practical for use from the point of view of either attackers or defenders.”
He suggests true adversarial AI will follow from increased ‘alignment’ research, which will become a mainstream topic in 2023. “Alignment,” he explains, “will bring the concept of adversarial machine learning into the public consciousness.”
AI Alignment is the study of the behavior of sophisticated AI models, considered by some as precursors to transformative AI (TAI) or artificial general intelligence (AGI), and whether such models might behave in undesirable ways that are potentially detrimental to society or life on this planet.
“This discipline,” says Patel, “can essentially be considered adversarial machine learning, since it involves determining what sort of conditions lead to undesirable outputs and actions that fall outside of expected distribution of a model. The process involves fine-tuning models using techniques such as RLHF – Reinforcement Learning from Human Preferences. Alignment research leads to better AI models and will bring the idea of adversarial machine learning into the public consciousness.”
Pieter Arntz, senior intelligence reporter at Malwarebytes, agrees that the full cybersecurity threat of AI is still brewing rather than imminent. “Although there is no real evidence that criminal groups have a strong technical expertise in the management and manipulation of AI and ML systems for criminal purposes, the interest is undoubtedly there. All they usually need is a technique they can copy or slightly tweak for their own use. So, even if we don’t expect any immediate danger, it is good to keep an eye on those developments.”
AI retains the promise of improving cybersecurity, and further strides will be taken in 2023 thanks to its transformative potential across a range of applications. “In particular, embedding AI into the firmware level should become a priority for organizations,” suggests Camellia Chan, CEO and founder of X-PHY.
“It’s now possible to have AI-infused SSD embedded into laptops, with its deep learning abilities to protect against every type of attack,” she says. “Acting as the last line of defense, this technology can immediately identify threats that could easily bypass existing software defenses.”
Marcus Fowler, CEO of Darktrace Federal, believes that companies will increasingly use AI to counter resource restrictions. “In 2023, CISOs will opt for more proactive cyber security measures in order to maximize RoI in the face of budget cuts, shifting investment into AI tools and capabilities that continuously improve their cyber resilience,” he says.
“With human-driven means of ethical hacking, pen-testing and red teaming remaining scarce and expensive as a resource, CISOs will turn to AI-driven methods to proactively understand attack paths, augment red team efforts, harden environments and reduce attack surface vulnerability,” he continued.
Karin Shopen, VP of cybersecurity solutions and services at Fortinet, foresees a rebalancing between AI that is cloud-delivered and AI that is locally built into a product or service. “In 2023,” she says, “we expect to see CISOs re-balance their AI by purchasing solutions that deploy AI locally for both behavior-based and static analysis to help make real-time decisions. They will continue to leverage holistic and dynamic cloud-scale AI models that harvest large amounts of global data.”
It is clear that a new technology must be taken seriously when the authorities start to regulate it. This has already started. There has been an ongoing debate in the US over the use of AI-based facial recognition technology (FRT) for several years, and the use of FRT by law enforcement has been banned or restricted in numerous cities and states. In the US, this is a Constitutional issue, typified by the Wyden/Paul bipartisan bill titled the ‘Fourth Amendment Is Not for Sale Act’ introduced in April 2021.
This bill would ban US government and law enforcement agencies from buying user data without a warrant. This would include their facial biometrics. In an associated statement, Wyden made it clear that FRT firm Clearview.AI was in its sights: “this bill prevents the government buying data from Clearview.AI.”
At the time of writing, the US and EU are jointly discussing cooperation to develop a unified understanding of necessary AI concepts, including trustworthiness, risk, and harm, building on the EU’s AI Act and the US AI Bill of Rights – and we can expect to see progress on coordinating mutually agreed standards during 2023.
But there is more. “The NIST AI Risk Management Framework will be released in the first quarter of 2023,” says Polyakov. “As for the second quarter, we have the start of the AI Accountability Act; and for the rest of the year, we have initiatives from IEEE, and a planned EU Trustworthy AI initiative as well.” So, 2023 will be an eventful year for the security of AI.
“In 2023, I believe we will see the convergence of discussions around AI and privacy and risk, and what it means in practice to do things like operationalizing AI ethics and testing for bias,” says Christina Montgomery, chief privacy officer and AI ethics board chair at IBM. “I’m hoping in 2023 that we can move the conversation away from painting privacy and AI issues with a broad brush, and from assuming that, ‘if data or AI is involved, it must be bad and biased’.”
She believes the issue often isn’t the technology, but rather how it is used, and what level of risk is driving a company’s business model. “This is why we need precise and thoughtful regulation in this space,” she says.
Montgomery gives an example. “Company X sells Internet-connected ‘smart’ lightbulbs that monitor and report usage data. Over time, Company X gathers enough usage data to develop an AI algorithm that can learn customers’ usage patterns and give users the option of automatically turning on their lights right before they come home from work.”
This, she believes, is an acceptable use of AI. But then there’s company Y. “Company Y sells the same product and realizes that light usage data is a good indicator for when a person is likely to be home. It then sells this data, without the consumers’ consent, to third parties such as telemarketers or political canvassing groups, to better target customers. Company X’s business model is much lower risk than Company Y.”
AI is ultimately a divisive subject. “Those in the technology, R&D, and science domain will cheer its ability to solve problems faster than humans imagined. To cure disease, to make the world safer, and ultimately saving and extending a human’s time on earth…” says Donnie Scott, CEO at Idemia. “Naysayers will continue to advocate for significant limitations or prohibitions of the use of AI as the ‘rise of the machines’ could threaten humanity.”
In the end, he adds, “society, through our elected officials, needs a framework that allows for the protection of human rights, privacy, and security to keep pace with the advancements in technology. Progress will be incremental in this framework advancement in 2023 but discussions need to increase in international and national governing bodies, or local governments will step in and create a patchwork of laws that impede both society and the technology.”
For the commercial use of AI within business, Montgomery adds, “We need – and IBM is advocating for – precision regulation that is smart and targeted, and capable of adapting to new and emerging threats. One way to do that is by looking at the risk at the core of a company’s business model. We can and must protect consumers and increase transparency, and we can do this while still encouraging and enabling innovation so companies can develop the solutions and products of the future. This is one of the many spaces we’ll be closely watching and weighing in on in 2023.”
Related: Bias in Artificial Intelligence: Can AI be Trusted?
Related: Get Ready for the First Wave of AI Malware
Related: Ethical AI, Possibility or Pipe Dream?
Related: Becoming Elon Musk – the Danger of Artificial Intelligence
The post Cyber Insights 2023: Artificial Intelligence appeared first on SecurityWeek.
The NIST compliance framework consists of five core functions: Identify, Protect, Detect, Respond and Recover. In my previous column, I mapped threat intelligence capabilities to the NIST core function of Identify. In this column, I will continue the discussion by mapping threat intelligence to the additional functions of Protect, Detect and Respond. By doing so, I will highlight how threat intelligence is critical when justifying budget, not only for governance, risk and compliance (GRC) personnel, but also for threat intelligence, incident response, security operations, CISO and third-party risk buyers.
Concerns such as data leakage, IOCs, credential theft, third-party suppliers and the selling of intellectual property are all relevant to the NIST framework. As CTI teams prioritize the intelligence requirements of their business stakeholders, it is beneficial to provide context by mapping the impact of cybersecurity threat intelligence programs to the following NIST core functions.
PROTECT
Data Security
9) PR.DS-5: Protections against data leaks are implemented: Data leakage detection capabilities can be used to identify and remediate data leaks. Monitoring outbound connections and content going to file sharing or cloud services is typically a starting point (a simple illustration follows at the end of this PROTECT section).
Information Protection Processes and Procedures
10) PR.IP-12: A vulnerability management plan is developed and implemented: CTI providers typically offer a monitoring solution for vulnerability management (VM). Providing near real-time telemetry on attackers’ ability to exploit vulnerabilities differentiates this from traditional, static VM tooling.
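Returning to PR.DS-5 above, here is a rough sketch of the ‘starting point’ it describes: flagging large outbound transfers to file-sharing or cloud-storage domains from a connection log. The domain list, log format and threshold are illustrative assumptions, not a recommended configuration.

# Sketch for PR.DS-5: flag outbound connections to file-sharing/cloud-storage
# domains in a connection log. Domains, columns and threshold are assumptions.
import csv

WATCHED_DOMAINS = {"drive.google.com", "dropbox.com", "wetransfer.com", "mega.nz"}
BYTES_THRESHOLD = 50 * 1024 * 1024          # 50 MB outbound, arbitrary for the sketch

def flag_uploads(log_path: str) -> None:
    with open(log_path, newline="") as fh:
        # expects columns: src_host, dest_domain, bytes_out
        for row in csv.DictReader(fh):
            domain = row["dest_domain"].lower()
            if any(domain.endswith(d) for d in WATCHED_DOMAINS) and int(row["bytes_out"]) > BYTES_THRESHOLD:
                print(f"possible data leak: {row['src_host']} -> {domain} ({row['bytes_out']} bytes)")

# flag_uploads("outbound_connections.csv")   # hypothetical export from a proxy or NetFlow tool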
DETECT
Anomalies and Events
11) DE.AE-2: Detected events are analyzed to understand attack targets and methods: CTI can proactively detect events and, during incident response activities, provide context and enrichment for investigations. Conducting threat group attribution is a common threat intelligence use case for reacting to an incident.
12) DE.AE-3: Event data are collected and correlated from multiple sources and sensors: Threat intelligence and managed service providers are a source for event data, context and enrichment. IOCs, compromised credentials and intellectual property theft are common event data sources.
Continuous Security Monitoring
13) DE.CM-1: The network is monitored to detect potential cybersecurity events: Similar to the previous bullet, CTI data and managed service providers monitor the external network and alert on potential cybersecurity events that are relevant to your perimeter network and cloud services.
14) DE.CM-3: Personnel activity is monitored to detect potential cybersecurity events: CTI tooling monitors the external digital footprint of key staff and VIPs to detect cybersecurity events. Personally identifiable information (PII) takedowns are common outcomes.
15) DE.CM-5: Unauthorized mobile code is detected: Mobile application monitoring detects unauthorized mobile code, including any code posted to third-party repositories (GitHub), cloud services or hosting providers (Linode).
16) DE.CM-6: External service provider activity is monitored to detect potential cybersecurity events: CTI feeds and managed service providers can be used to monitor external service providers for potential cybersecurity events. For example, data leaks of third parties are a common breach for larger enterprises and can be monitored.
17) DE.CM-8: Vulnerability scans are performed: Similar to the above, CTI providers can enrich vulnerability scanners with greater context and external telemetry.
RESPOND
Response Planning
18) RS.RP-1: Response plan is executed during or after an incident: CTI providers can be used for the external investigation component of incident response plans. This is commonly done to prepare for various ransomware actors.
Analysis
19) RS.AN-1: Notifications from detection systems are investigated: Not limited to network devices, CTI and threat management functions augment incident response to alerts of security events and incidents.
Mitigation
20) RS.MI-3: Newly identified vulnerabilities are mitigated or documented as accepted risks: CTI teams submit vulnerabilities validated in the wild to appropriate stakeholders for remediation.
Protecting against, detecting and responding to cyber incidents is generally associated with the security operations team and incident responders, who use tools to protect endpoints and servers and to remediate security incidents. While these are critical aspects of NIST compliance, threat intelligence fits squarely into these facets of NIST from an “outside the firewall” approach.
Related: Mapping Threat Intelligence to the NIST Compliance Framework Part 1
The post Mapping Threat Intelligence to the NIST Compliance Framework Part 2 appeared first on SecurityWeek.