artificial intelligence - All Articles - CISO Platform2024-03-28T22:55:57Zhttps://www.cisoplatform.com/profiles/blogs/feed/tag/artificial+intelligenceResearchers Give Birth to the First GenAI Wormhttps://www.cisoplatform.com/profiles/blogs/researchers-give-birth-to-the-first-genai-worm2024-03-04T19:12:36.000Z2024-03-04T19:12:36.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><img src="https://storage.ning.com/topology/rest/1.0/file/get/12395860277?profile=RESIZE_400x&width=400"></div><div><p class="graf graf--p">It was bound to happen — researchers have created a 1st generation AI worm that can steal data, propagate malware, and spread via email.</p><p class="graf graf--p">Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology, and Ron Bitton from Intuit created the self-replicating worm and bestowed the name ‘Morris II’ after the notorious worm that infected systems in the 1980’s. Their creation targets AI apps and AI-enabled email assistants. They published a <a class="markup--anchor markup--p-anchor" href="https://sites.google.com/view/compromptmized" target="_blank">research paper</a> and video showing methods to steal data and affect others email systems.</p><p class="graf graf--p">This worm basically embeds adversarial type data into malicious email that manipulates victim’s systems to propagate messages, perform malicious activity, and exfiltrates sensitive data.</p><p class="graf graf--p">Strategically, the crux of this evolving problem is based in the fact that the pursuit of more functionality and subsequent value of GenAI and LLM systems, they require more access and permissions to do things in the digital ecosystems they inhabit. So, they become an incredibly powerful tool for both good and also for bad if instructed by malicious parties.</p><p class="graf graf--p">So, take a breath! This is just the beginning!</p><p class="graf graf--p">We must all understand that we can seize the great benefits of disruptive technologies, like Artificial Intelligence, but we must also be responsible to proactively understand and mitigate the accompanying cybersecurity risks!</p></div>Lacking Practicality - Executive Order for Safe, Secure, and Trustworthy AIhttps://www.cisoplatform.com/profiles/blogs/lacking-practicality-executive-order-for-safe-secure-and-trustwor2023-10-30T18:55:12.000Z2023-10-30T18:55:12.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><img src="https://storage.ning.com/topology/rest/1.0/file/get/12271494871?profile=RESIZE_400x&width=400"></div><div><p id="f139" class="pw-post-body-paragraph xy xz ug nd b ya yb yc yd ye yf yg yh mn yi yj yk ms yl ym yn mx yo yp yq yr jg bj">The White House just released an <a class="af ks" href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank">Executive Order</a> intended to lay down some standards intended to manage the risks of Artificial Intelligence. I absolutely like the idea of establishing guardrails to make AI safe, secure, and trustworthy, but I am unsure that the concepts will manifest into something meaningful.</p><p class="pw-post-body-paragraph xy xz ug nd b ya yb yc yd ye yf yg yh mn yi yj yk ms yl ym yn mx yo yp yq yr jg bj">It appears that the authors have a simplistic view of AI, which if true, can be easily managed. However, AI is more an adaptable set of tools and capabilities. It is not a specific machine or device. It is equivalent to an edict requiring the Internet to be safe, secure, and trustworthy. Great in concept, but shortsighted in the actual complexity to achieve and sustain.</p><p class="pw-post-body-paragraph xy xz ug nd b ya yb yc yd ye yf yg yh mn yi yj yk ms yl ym yn mx yo yp yq yr jg bj">For example, there is a requirement for AI-generated content to be watermarked, to protect from fraud and deception. We can’t do this well in the real world, much less the digital one. If we could do this, spam and phishing would not be a problem. In the Generative AI world, every time a new tool or process has emerged to watermark content or detect fakes, it has been undermined in a very short period.</p><p id="ff44" class="pw-post-body-paragraph xy xz ug nd b ya yb yc yd ye yf yg yh mn yi yj yk ms yl ym yn mx yo yp yq yr jg bj">In general, the document is filled with mostly ‘don’t use AI for bad’ concepts, but not actual structures to govern, control, or penalize non-compliant practices.</p><p class="pw-post-body-paragraph xy xz ug nd b ya yb yc yd ye yf yg yh mn yi yj yk ms yl ym yn mx yo yp yq yr jg bj">At a high level, there is much good in this Executive Order, as it draws attention to key areas that we must manage, including security standards for AI implementation in Critical Infrastructure sectors. The order supports a long-needed national data privacy law that unifies the collage of confusing and inconsistent state rules. It offers guidance for many ways how the government can or should use AI.</p><p class="pw-post-body-paragraph xy xz ug nd b ya yb yc yd ye yf yg yh mn yi yj yk ms yl ym yn mx yo yp yq yr jg bj">These are great areas to pursue, but the rapid evolution and adoption of AI greatly limits our practical visibility and capabilities in how best to establish meaningful guardrails. The result will likely be similar to what has been seen in the past, ineffective standards, with government regulations that are outdated by the time they are defined, and the development community several steps ahead in whatever they want to accomplish.</p></div>AI and Cybersecurity - Cybersecurityhttps://www.cisoplatform.com/profiles/blogs/ai-and-cybersecurity-cybersecurity2023-06-21T18:28:43.000Z2023-06-21T18:28:43.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><img src="https://storage.ning.com/topology/rest/1.0/file/get/12105729856?profile=RESIZE_400x&width=400"></div><div><p style="text-align:center;"><iframe title="YouTube video player" src="https://www.youtube.com/embed/cmBfGO5AxNM" width="560" height="315" frameborder="0" allowfullscreen=""></iframe></p><p>Meetup with Richard Stiennon and Matthew Rosenquist</p><p>The hottest topic of 2023 - Artificial Intelligence. Richard Stiennon and I discuss the relevance and how Large Language Models (LLMs), like ChatGPT, are adding innovation to the use of AI in the cybersecurity domain.</p><p> </p></div>5 Performance Testing Trends Expected To Be Witnessed In 2023https://www.cisoplatform.com/profiles/blogs/5-performance-testing-trends-expected-to-be-witnessed-in-20232023-03-06T04:47:05.000Z2023-03-06T04:47:05.000ZRay Parkerhttps://www.cisoplatform.com/members/RayParker<div><img src="https://storage.ning.com/topology/rest/1.0/file/get/10993173282?profile=RESIZE_400x&width=400"></div><div><p><span style="font-weight:400;">Performance testing examines an app’s capability, speed, scalability, and responsiveness under a particular quantity of workload. Indeed though it's an important aspect of icing that the software’s quality is over to the mark, numerous businesses give it a stepmotherly treatment. It's frequently conducted only after functional testing is completed, and occasionally, only after the program is released.</span></p><p><span style="font-weight:400;">There are several objects for performance testing computing processing speed, assaying operation outturn, network consumption, data transfer haste, maximum concurrent transfers, workload effectiveness, memory use, etc. Considered a subset of performance engineering, it's also called Perf Testing.</span></p><p><strong>Testing in product</strong></p><p><span style="font-weight:400;">Before opening the product to the public, it's wise to test it in the product. When you do so, you can expose it to a nanosecond part of the customer base. It helps you find and fix problems incontinently. Some teams perform nonstop delivery which pushes every law change to the product line if it passes automated tests. The new law that's pushed, will only be available for select many inventors internally. A few other plans that are popularly utilized for testing include A/ B split testing, blue-green deploys, and incremental roll-pouts. This is a very important step considered by <a href="https://softwaretestinglead.com/performance-testing-companies-around-the-world/" target="_blank">performance testing companies</a>.</span></p><p><strong>Synthetic Transactions</strong><span style="font-weight:400;"> </span></p><p><span style="font-weight:400;">When you cover the product, you'll get to know how long requests will be live on the garçon, but it'll give you no idea about the client’s experience. Synthetic deals help you understand what a customer goes through as it simulates a real customer.</span></p><p><span style="font-weight:400;">Then’s what a synthetic account will do for a social networking point. The customer can log in, go through their profile, view some of the posts that are uploaded on their feed, talk to ‘ musketeers ’ on the point, add ‘ musketeers ’, and so on.</span></p><p><span style="font-weight:400;">Synthetic accounts can indeed pretend factual orders for eCommerce spots. When businesses track the real customer experience, they stand to get a ton of data and it gives them an idea about issues, detainments, and crimes that guests face. It can also be used to find product problems snappily. It'll help software businesses assess how their operation is used by consumers.</span></p><p><strong>Self- service</strong></p><p><span style="font-weight:400;">Performance is viewed else by people in programming positions, DevOps, and security. The tools that we see these days are customized for each part and indeed allow specialized specialists to use their own set of tools. IT operation specialists will want to see performance data in the same place where they get their work done so that they can take corrective action incontinently. Programmers who can do performance work within their integrated development terrain have bigger chances of keeping performance engineering work according to the development that's passing.</span></p><p><strong>Chaos Testing</strong></p><p><span style="font-weight:400;">Chaos Testing is a largely disciplined methodology to test the integrity of a system where you proactively pretend and identify failures in a terrain before there's any unplanned time-out or a bad customer experience. It involves understanding how the operation will bear when failures are in one or further corridors of the armature. There are several misgivings in the product terrain.</span></p><p><span style="font-weight:400;">The idea of chaos testing is to understand how the system will bear if there are failures. It'll also help understand if there will be any major issues if there are system failures. For illustration, if there's a time-out in one of the web services, the entire structure shouldn't go down. Chaos engineering helps find loopholes in the system before the production process.</span></p><p><strong>Incorporating Artificial Intelligence For Automation Testing</strong></p><p><span style="font-weight:400;">Since client experience changes on a platform performance testing scripts are changed too. By using <a href="https://softdevlead.com/how-is-artificial-intelligence-shaping-our-future/" target="_blank">Artificial Intelligence( AI)</a> and <a href="https://www.techtarget.com/searchenterpriseai/definition/machine-learning-ML" target="_blank">Machine learning ( ML)</a>, the conditioning of the real customer on the platform and the customer trip with their patterns can be exhumed.</span></p><p><span style="font-weight:400;">Using these patterns, it's possible to produce a performance testing model that will make sure your cargo testing scripts match the real experience of the consumers. Performance testing companies always consider this. </span></p><p><span style="font-weight:400;">Creating performance-grounded test models will help businesses find new issues in their testing systems. AI-powered performance testing apps can optimize test suites as it reduces spare test cases and ensures optimal test content by assaying keywords. It can indeed identify unexplored areas in apps. Although artificial intelligence and machine experience haven't yet came a part of regular performance testing practices, we will soon see them gaining traction in chancing out problematic areas.</span></p><p><strong>Conclusion</strong></p><p><span style="font-weight:400;">Performance engineering teams might not be a regular point in all businesses yet, but they'll come a part of the mainstream in the time 2023. For this, there are numerous well-reputed performance testing companies. Customer experience becomes further and further critical to the success of apps. Thus, it becomes the motorist for frequent releases, shorter development cycles, fleetly changing conditions, and so on. Thanks to this, software businesses have a customer-focused approach to quality during each stage of the software development lifecycle. When done right, performance engineering enables software inventors and quality assurance masterminds to make the needed performance criteria from the beginning itself.</span></p></div>Privacy Concerns for Dual-Use AI Image Clarity Toolshttps://www.cisoplatform.com/profiles/blogs/privacy-concerns-for-dual-use-ai-image-clarity-tools2021-12-15T02:27:59.000Z2021-12-15T02:27:59.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><img src="https://storage.ning.com/topology/rest/1.0/file/get/9910218682?profile=RESIZE_400x&width=400"></div><div><p class="graf graf--p">AI tech is a powerful tool. The original photo (left) was cleaned-up with an AI deep learning algorithm (Image source: from <a class="markup--anchor markup--p-anchor" href="https://www.linkedin.com/posts/murilo-gustineli_computervision-deeplearning-artificialintelligence-activity-6874434789815009280-Xow8" target="_blank">Murilo Gustineli</a>) and restoring tremendous clarity.</p><p class="graf graf--p">The AI researchers outline their progress in their white paper Towards Real-World Blind Face Restoration with Generative Facial Prior (<a class="markup--anchor markup--p-anchor" href="https://arxiv.org/pdf/2101.04061" target="_blank">https://arxiv.org/pdf/2101.04061</a>) and code is available for others to try on their project webpage: <a class="markup--anchor markup--p-anchor" href="https://xinntao.github.io/projects/gfpgan" target="_blank">https://xinntao.github.io/projects/gfpgan</a>.</p><p class="graf graf--p">The GFP-GAN system (Generative Facial Prior GFP — Generative Adversarial Network GAN), published by Xintao Wang, Yu Li, and Honglun Zhang and Ying Shan, is able to restore images much better than previous AI systems. The results are nothing short of impressive.</p><p class="graf graf--p"><a href="{{#staticFileLink}}9910228656,original{{/staticFileLink}}"><img class="align-full" src="{{#staticFileLink}}9910228656,RESIZE_710x{{/staticFileLink}}" width="710" alt="9910228656?profile=RESIZE_710x" /></a></p><p class="if ig fy ih b ii ij ik il im in io ip iq ir is it iu iv iw ix iy iz ja jb jc dn gv">As a privacy professional, when I see these transformational examples, I have grave concerns about undesired monitoring of the population and the ability to clean-up distant or low-quality surveillance images, to identify and track a population.</p><p class="if ig fy ih b ii ij ik il im in io ip iq ir is it iu iv iw ix iy iz ja jb jc dn gv">Digital cameras are widely deployed by businesses and governments. A major limitation is the clarity of images at a distance. It becomes very difficult to positively identify subjects. With AI enhancing image clarity tools, identifying people at great distances or with poor resolution cameras could be automated at scale. That could allow the tracking of people wherever they go, catalog everyone they speak with, and if eventually applied to read-lips, it could eavesdrop on conversations at a distance.</p><p id="d4a4" class="if ig fy ih b ii ij ik il im in io ip iq ir is it iu iv iw ix iy iz ja jb jc dn gv">However, you may be shocked to know that I am equally excited as this is also a potentially <strong class="ih fz">PRIVACY ENHANCING</strong> technology! Because this same type of AI can be used to perturb clear images in ways that undermine facial recognition algorithms.</p><p id="aae4" class="if ig fy ih b ii ij ik il im in io ip iq ir is it iu iv iw ix iy iz ja jb jc dn gv">Imagine this tech embedded in privacy-supporting cameras that modify the pixels in ways, unnoticeable to the human eye, but thwarts AI systems from conducting bulk identification of people from its video feed. Humans still see unblurred images but automated computer processes are thwarted from harvesting identified personal data at scale. Such a usage could find a potentially desirable balance between security and privacy.</p><p class="if ig fy ih b ii ij ik il im in io ip iq ir is it iu iv iw ix iy iz ja jb jc dn gv">It is up to everyone to decide how such tools will be used.</p></div>There is No Easy Fix to AI Privacy Problemshttps://www.cisoplatform.com/profiles/blogs/there-is-no-easy-fix-to-ai-privacy-problems2020-02-08T21:19:56.000Z2020-02-08T21:19:56.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p><a href="{{#staticFileLink}}8669829467,original{{/staticFileLink}}" target="_blank"><img src="{{#staticFileLink}}8669829467,original{{/staticFileLink}}" class="align-full" alt="8669829467?profile=original" /></a></p><p>Artificial intelligence – more specifically, the machine learning (ML) subset of AI - has a number of privacy problems.</p><p>Not only does ML require vast amounts of data for the training process, but the derived system is also provided with access to even greater volumes of data as part of the inference processing while in operation. These AI systems need to access and “consume” huge amounts of data in order to exist and, in many use cases, the data involved is private: faces, medical records, financial data, location information, biometrics, personal records, and communications. </p><p>Preserving privacy and security in these systems is a great challenge. The problem grows in sensitivity as the public becomes more aware of the consequences of their privacy being violated and misused. Regulations are continually evolving to restrict organizations and penalize offenders who fail to respect users’ rights. British Airways was, for example, recently fined $228 million by the European Union for privacy violations. </p><p>There is currently a fine line that AI developers must walk to create useful systems to benefit society and yet avoid violating privacy rights.</p><p>For example, AI systems are an excellent candidate to help law enforcement rescue abducted and exploited children by identifying them in social media posts. Such a system would be relentless in scouring all posts and matching images to missing persons, even taking into account the likely changes of years passing by, something impossible for humans to accomplish accurately or at scale. However, such a system would need to do facial recognition analysis on every picture posted in a social network. That could identify and ultimately contribute to tracking everyone, even bystanders in the background of images. Sounds creepy and you may likely object. This is where privacy regulations and ethics must define what is allowable. Bringing home kidnapped kids or those who are forced into sex trafficking is very worthwhile but still requires adherence to privacy fundamentals, so greater harms aren’t inevitably created.</p><p>To accomplish such a noble feat, a system would need to be trained to recognize the faces of children. For accuracy, it would require a training database with millions of children’s faces. To follow the laws in some jurisdictions, the parents of each child in the training data set would need to approve the use of their child’s image as part of the learning process. No such approved database currently exists and it would be a tremendous undertaking to build one. It would probably take many decades to coordinate such an effort, leaving the promise of an efficient AI solution for finding kidnapped or exploited children just a hopeful concept for the foreseeable future. </p><p>Such is the dilemma of AI and privacy. This type of conflict arises when AI systems are in training and also when they are put to work to process real data.</p><p>Take that same facial recognition system and connect it to both a federal citizen registry and millions of surveillance cameras. Now, the government could identify and track people wherever they go, regardless if they have committed a crime, which is very Orwellian.</p><p>But innovation is coming to help - federated learning, differential privacy, and homomorphic encryption are technologies that can assist in navigating such challenges. However, they are just tools and not complete solutions. They can help in specific usages but always come with drawbacks and limitations, many of which can be significant. </p><ul><li><strong>Federated learning</strong> (aka collaborative learning) makes possible the training of algorithms without local data sets being exchanged or centralized. It’s all about compartmentalization, which is great for privacy, but it difficult to set up and scale. Additionally, it can be limiting to data researchers that are desperate for massive data sets containing the rich information needed for training AI systems.</li><li><strong>Differential privacy</strong> takes a different approach, attempting to obfuscate the details by providing aggregate information but not sharing specific data, i.e., “describe the forest, but not individual trees”. It is often used in conjunction with federated learning. Again, there are privacy benefits but it can result in serious degradation of accuracy for the AI system, thereby undermining the overall value and purpose.</li><li><strong>Homomorphic encryption</strong>, one of my favorites, is a promising technology that allows for data to remain encrypted yet still allow useful computations to be done as if they were unencrypted. Imagine a class of students being asked who is their favorite teacher: Alice or Bob. To protect the privacy of the answers, an encrypted database is created containing the names of individual students and the corresponding name of their favorite teacher. While in an encrypted state, calculations could be done, in theory, to tabulate how many votes there were for Alice and for Bob, without actually looking at the individual choices by each student. Applying this to AI development, data privacy remains intact while training can still proceed. Sounds great, but in real-world scenarios, it is extremely limited and takes tremendous computing power to accomplish. For most AI applications it is simply not a feasible way to train the system.</li></ul><p>For now, there is no perfect solution on the horizon. It currently takes the expertise of and committed partnerships between privacy, legal, AI developers, and ethics professionals to evaluate individual use-cases to determine the best course of action. Even then, most of the focus is placed only on current concerns and not on applying a more difficult strategic viewpoint of what challenges will emerge in the future. The only thing that is clear is that we need to achieve the right level of privacy so we can benefit from the tremendous advantages that AI potentially holds for mankind. How that is achieved in an effective, efficient, timely, and consistent manner is beyond what anyone has figured out to date.</p><p> </p><p></p><p>Image by <a href="https://pixabay.com/users/Computerizer-4588466/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=2301646">Computerizer</a> from <a href="https://pixabay.com/?utm_source=link-attribution&utm_medium=referral&utm_campaign=image&utm_content=2301646">Pixabay</a></p><p>Originally published on HelpNetSecurity <a href="https://www.helpnetsecurity.com/2020/01/23/ai-privacy-problems">https://www.helpnetsecurity.com/2020/01/23/ai-privacy-problems</a></p><p><a href="{{#staticFileLink}}8669829467,original{{/staticFileLink}}" target="_blank"></a></p></div>The Entanglement of AI and Cybersecurity Podcasthttps://www.cisoplatform.com/profiles/blogs/the-entanglement-of-ai-and-cybersecurity-podcast2020-03-10T05:04:19.000Z2020-03-10T05:04:19.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p><a href="{{#staticFileLink}}8669832066,original{{/staticFileLink}}" target="_blank"><img src="{{#staticFileLink}}8669832066,original{{/staticFileLink}}" class="align-full" alt="8669832066?profile=original" /></a></p><p>The boundaries of cybersecurity will be manipulated by the advances in Artificial Intelligence, the evolution of digital threats, and on ever-adapting leadership. </p><p> <iframe width="560" height="315" src="https://www.youtube.com/embed/c6FO2RZjm5E?wmode=opaque" frameborder="0" allowfullscreen=""></iframe></p><p>I had a great time being interviewed by Vaishali Lambe [Lisha] in her podcast <a href="https://www.vaishalilambe.com/soleadsaturday-podcast-1">SoLeadSaturday</a> because we talked about how cybersecurity and AI are intertwined, how leadership is crucial, and the fact that technology tools are being used for both good and malicious purposes. The growing demands for a security-savvy workforce led us to explore the vast opportunities in the field. Emerging factors magnify the interesting swirls of competing challenges. To cap the discussion, we visualized the future of the industry and discussed the risks. </p><p>I provided insights to those interested in joining the cybersecurity professional community. Success requires a more inclusive and diverse workforce which includes a higher emphasis on the participation of women and underrepresented minorities. </p><p><a href="{{#staticFileLink}}8669832066,original{{/staticFileLink}}" target="_blank"></a><a href="{{#staticFileLink}}8669832087,original{{/staticFileLink}}" target="_blank"><img src="{{#staticFileLink}}8669832087,original{{/staticFileLink}}" class="align-full" alt="8669832087?profile=original" /></a><a href="{{#staticFileLink}}8669832066,original{{/staticFileLink}}" target="_blank"></a></p><p>Overall a great discussion. I had a lot of fun and look forward to future talks with Lisha!</p><p> </p><p>AI and cybersecurity are growing together and the future is still largely unknown. Where do you think the risks and opportunities will emerge? Let me know what you think in the comments below or follow me on <a href="https://www.linkedin.com/today/author/matthewrosenquist">LinkedIn</a> or <a href="https://medium.com/@matthew.rosenquist">Medium</a> and share your thoughts there. </p><p> </p><p> </p><p>#infosec #infosecurity #artificialintelligence #ai #cybersecurityjobs #cybersecuritystrategy #SoLeadSaturday #cybersecurity #leadership</p></div>AI and Cybersecurity Awareness Podcast - Cyber Risk Leaders Tell Allhttps://www.cisoplatform.com/profiles/blogs/ai-and-cybersecurity-awareness-podcast-cyber-risk-leaders-tell2020-06-02T19:14:38.000Z2020-06-02T19:14:38.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p>How will AI change the strategies of cybersecurity? Where will we see the first big impacts of attackers using AI? </p><p></p><p>Watch the Cyber Risk Leaders podcast... </p><p style="text-align:center;"><iframe width="560" height="315" src="https://www.youtube.com/embed/Pu2EgD-cmc4?wmode=opaque" frameborder="0" allowfullscreen=""></iframe></p><p>Shamane Tan and Carmen Marsh were wonderful hosts. I had a fantastic time talking about AI and cybersecurity in the Cyber Risk Leaders podcast. Additionally, Jonathan Hiroshi Rossi explains the success of organizing 10x Cybersecurity Awareness Tour across 30 countries and the value of educating all types of audiences about cybersecurity.</p><p> </p><p>Podcast: 'Cyber Risk Leaders' Tell All @ The Global Virtual Book Club EP 2: <a href="https://www.youtube.com/watch?v=Pu2EgD-cmc4">https://www.youtube.com/watch?v=Pu2EgD-cmc4</a></p></div>Killer Drones to be Available on the Global Arms Marketshttps://www.cisoplatform.com/profiles/blogs/killer-drones-to-be-available-on-the-global-arms-markets2020-06-24T20:37:37.000Z2020-06-24T20:37:37.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p><a href="{{#staticFileLink}}8669838867,original{{/staticFileLink}}" target="_blank"><img src="{{#staticFileLink}}8669838867,original{{/staticFileLink}}" class="align-full" alt="8669838867?profile=original" /></a></p><p>Turkey may be the first customer for the Kargu series of weaponized suicide drones specifically developed for military use. These semi-autonomous devices have been in development since 2017 and will eventually be upgraded to operate collectively as an autonomous swarm to conduct mass synchronized attacks. </p><p><iframe width="560" height="315" src="https://www.youtube.com/embed/Oqv9yaPLhEk?wmode=opaque" frameborder="0" allowfullscreen=""></iframe></p><p>This situation has been building for some time and I have been ringing the warning bell for years. Sadly, this is just the beginning of the development arc for these types of weapon systems. As better sensors, enhanced range, greater speed, cleverer AI, and greater payloads become available, we will see all manner of new usages and specializations.</p><p>Back when the airplane was first developed and used in WWI, they started as reconnaissance platforms, replacing the very limited and vulnerable dirigibles. Once they shifted to an offensive role, bombing and strafing ground targets, the interceptors emerged to counter the threat. By WWII, we had a massive range of different specialized aircraft for air superiority, interdiction, strategic bombing, and defense which evolved so fast they were unrecognizable as compared to their WWI origins. We are faced with the same future when it comes to autonomous drones. </p><p>Imagine the next generation minefield where drones lay dormant until sensors detect a target then pop up and pursue. How about <a href="https://www.youtube.com/watch?v=9CO6M2HsoIA" target="_blank">slaughter-bot</a> variants, that are programmed to target specific groups of people and work as part of a mesh network to saturate an area with hunter behaviors. Such weapons could redefine guerrilla and low-intensity warfare. Forget about buried improvised explosive devices (IED), which have been the bane of coalition forces over the past few years. Those were deployed by attackers with the hope a target would happen to wander by and come close enough to be attacked. These drones will be able to aggressively seek out adversaries, structures, or innocent civilians at range with little to no exposure of the operator.</p><p>Name any nation or warlord that would not embrace such cheap and replaceable devices.</p><p>The defensive technologies to protect against such attacks are still in their nascent phases. Traditional defenses are at a distinct disadvantage. There is much that must be done to establish capabilities, oversight, and limitations that restricts abusive and undesired use of these types of munitions in conflicts that could span the globe. </p><p>These aren’t the only drones under development or in use. But the low cost, small size, single-operator design, swarm design goals, and payload suited to attack people makes for an unnerving combination. As the world’s inventories expand to include weaponized autonomous drones, the need for proper cybersecurity will also increase.</p><p>I have warned governments in the past. They must be sure they have an antidote ready before releasing innovative weapons to the world. That includes viruses, drones, hacking suites, and AI sub-systems that could potentially be weaponized. The rush to deploy new toys often backfires. Adversaries may use the technology and tactics against those who introduced it, their allies, or innocent civilians. Without possessing the proper means of protection, giving the world a new weapon is just asking for trouble.</p><p></p><p>Interested in more? Follow me on <a href="https://www.linkedin.com/today/author/matthewrosenquist" target="_blank">LinkedIn</a>, <a href="https://medium.com/@matthew.rosenquist" target="_blank">Medium</a>, and <a href="https://twitter.com/Matt_Rosenquist" target="_blank">Twitter (@Matt_Rosenquist)</a> to hear insights, rants, and what is going on in cybersecurity.</p></div>Teaching AI to be Evil with Unethical Datahttps://www.cisoplatform.com/profiles/blogs/teaching-ai-to-be-evil-with-unethical-data2020-07-04T17:13:56.000Z2020-07-04T17:13:56.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p><a href="{{#staticFileLink}}8669836876,original{{/staticFileLink}}" target="_blank"><img src="{{#staticFileLink}}8669836876,original{{/staticFileLink}}" class="align-center" alt="8669836876?profile=original" /></a></p><p>An Artificial Intelligence (AI) system is only as good as its training. For AI Machine Learning (ML) and Deep Learning (DL) frameworks, the training data sets are a crucial element that defines how the system will operate. Feed it skewed or biased information and it will create a flawed inference engine. </p><p><a href="https://thenextweb.com/neural/2020/07/01/mit-removes-huge-dataset-that-teaches-ai-systems-to-use-racist-misogynistic-slurs/" target="_blank">MIT recently removed a dataset</a> that has been popular with AI developers. The training set, 80 Million Tiny Images, was scraped from Google in 2008 and used in training AI software to identify objects. It consists of images that are labeled with descriptions. During the learning phase, an AI system will ingest the dataset and ‘learn’ how to classify images. The problem is that many of the images are questionable and the labels were inappropriate. For example, women are described with derogatory terms, body parts are identified with offensive slang, and racial slurs were sometimes used to label minority people. Such training should never be allowed.</p><p>AI developers need vast amounts of training data to train their systems. Collections are often created out of convenience, without consideration for courteous content, copyright restrictions, compliance to licensing agreements, people’s privacy rights, or respect for society. Unfortunately, many of the available sets were haphazardly created by scraping the internet, social sites, copyrighted content, and human interactions without approval or notice. </p><p>Many of the most used training datasets have issues. A large number were created by unethically acquiring content, some contain derogatory or inflammatory information, and for others, the sample is not representative because it excludes certain groups that would benefit from inclusion. </p><p>The problem has become worse over time. Flawed datasets, that were made openly available to the developer community early-on, became so popular that they are now considered a standard. These benchmarks are used to check accuracy and performance across different AI systems and configurations. </p><p>Too few are vetted for inclusion, content, accuracy, or socially acceptable content. Using such flawed records is simply unethical because the resulting systems can be racially charged, biased, and promote inequality. </p><p>We cannot have good AI if the commonly used datasets create unethical systems. All files should be vetted and both the creators and product developers held responsible. Just as chefs are held accountable for the ingredients they put into their prepared dishes, so should the AI community be held responsible for allowing poor data to result in harmful AI systems.</p><p></p><p></p><p>Interested in more? Follow me on <a href="https://www.linkedin.com/today/author/matthewrosenquist" target="_blank">LinkedIn</a>, <a href="https://medium.com/@matthew.rosenquist" target="_blank">Medium</a>, and <a href="https://twitter.com/Matt_Rosenquist" target="_blank">Twitter (@Matt_Rosenquist)</a> to hear insights, rants, and what is going on in cybersecurity.</p><p><a href="{{#staticFileLink}}8669836876,original{{/staticFileLink}}" target="_blank"></a></p></div>Will AI rescue the world from the impending doom of cyber-attacks or be the causehttps://www.cisoplatform.com/profiles/blogs/will-ai-rescue-the-world-from-the-impending-doom-of-cyber-attacks2020-07-07T23:47:17.000Z2020-07-07T23:47:17.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p><a href="{{#staticFileLink}}8669835891,original{{/staticFileLink}}" target="_blank"><img src="{{#staticFileLink}}8669835891,original{{/staticFileLink}}" class="align-center" alt="8669835891?profile=original" /></a></p><p>There has been a good deal of publicized chatter about <a href="https://www.cpomagazine.com/cyber-security/the-largest-cyber-attack-of-all-time-is-coming-and-ai-could-help-stop-it/">impending cyberattacks at an unprecedented scale</a> and how <a href="https://www.forbes.com/sites/stephenmcbride1/2020/05/14/why-the-largest-cyberattack-in-history-will-happen-within-six-months">Artificial Intelligence (AI) could help stop them</a>. Not surprisingly much of the discussion is led by AI vendors in the cybersecurity space. Although they have a vested interest in raising an alarm, they do have a point. But it is only half the story.</p><p>There is a new ‘largest’ cyber-attack almost every year. Sometimes it is an overwhelming Distributed-Denial-of-Service (DDoS) attack, other times it has been a deeper penetrating worm, more powerful botnet, massive data breach, or a bigger financial heist. This is not unexpected. Rather it is a result of the world embracing Digital Transformation (DT) with more assets and reliance on the growing digital ecosystem. </p><p>Although I do not think there will be some cataclysmic cyber-attack that brings everything down in the foreseeable future, we are likely to experience an ever-increasing rate and impact of attacks. I find the AI discussions to be interesting, not for the arguments for how AI can help, but for what is omitted.</p><p>You see, AI is just a tool. A powerful one which will be used by both attackers and defenders. </p><p>AI can greatly enhance cybersecurity prediction, prevention, detection, and response capabilities to improve defenses, adapt faster to new threats, and lower the overalls cost of security. Attackers are also attracted to AI capabilities because of the very same attributes of speed, scale, automation, and effectiveness that empowers them to relentlessly pursue targets, gain access, seize assets and undermine attempts by security to detect and evict them. AI can be used to attack and undermine other AI systems, which is becoming a problem. <a href="https://securityintelligence.com/articles/why-adversarial-examples-are-such-a-dangerous-threat-to-deep-learning/">Adversarial attacks</a> are one such class of exploitation where the inputs to an AI system are modified by the opposition in such a way that the output is intentionally manipulated. These and other types of offensive systems that undermine AI represent a serious and growing risk to <a href="https://blog.f-secure.com/5-adversarial-ai-attacks-that-show-machines/">consumers</a>, <a href="https://www.militaryaerospace.com/trusted-computing/article/14178908/trusted-computing-artificial-intelligence-ai-information-warfare">militaries</a>, critical infrastructure, and <a href="https://towardsdatascience.com/your-car-may-not-know-when-to-stop-adversarial-attacks-against-autonomous-vehicles-a16df91511f4">transportation</a>.</p><p>Yes, AI can help with the next ‘largest’ attacks, but it is also very likely that AI will be behind those attacks as well. So, let’s have a balanced discussion about the risks that increase every day, for all of us with roots in the digital domain. AI will grow and play a pivotal role in how technology influences the lives of every person on the planet. It will be very important to both cybersecurity and cyber-attackers in how they can maneuver. The game is on and the stakes are high.</p><p>Welcome to the new AI cyber-arms race.</p><p> </p><p> </p><p>Interested in more? Follow me on <a href="https://www.linkedin.com/today/author/matthewrosenquist">LinkedIn</a>, <a href="https://medium.com/@matthew.rosenquist">Medium</a>, and <a href="https://twitter.com/Matt_Rosenquist">Twitter (@Matt_Rosenquist)</a> to hear insights, rants, and what is going on in cybersecurity.</p></div>Cybersecurity Issues and Trends - Interview with CybxSecurityhttps://www.cisoplatform.com/profiles/blogs/cybersecurity-issues-and-trends-interview-with-cybxsecurity2020-03-24T21:45:25.000Z2020-03-24T21:45:25.000ZMatthew Rosenquisthttps://www.cisoplatform.com/members/MatthewRosenquist<div><p><a href="{{#staticFileLink}}8669832266,original{{/staticFileLink}}" target="_blank"><img class="align-full" src="{{#staticFileLink}}8669832266,original{{/staticFileLink}}" alt="8669832266?profile=original" /></a></p>
<p>My recent interview with Mark Byrne, from Cybx Security, covered a great range of cybersecurity questions, including new threats and solutions, Artificial Intelligence, DevSecOps, cybercrime, security impacts of Coronavirus, and the future of cybersecurity. </p>
<p>Excerpt: </p>
<blockquote><strong>One of the most interesting points you made was regarding cyber crime, ‘the next billion cyber criminals’, and economically struggling countries. You also highlighted Ransomware as a service RaaS as being legitimately one of the most alarming threats. Could you elaborate on this?</strong></blockquote>
<blockquote>The internet is adding about <a href="https://wearesocial.com/blog/2019/01/digital-2019-global-internet-use-accelerates">a million new users every day</a>. With modern countries already having most of their citizens online, many of the new users are from economically struggling nations. We often forget that half of the world earns less than $10 a day. It is these new internet users in geographies that have few economic options that will be seeking ways to earn money with their new connection to the global digital ecosystem.</blockquote>
<blockquote>Cybercrime, like <a href="https://medium.com/@matthew.rosenquist/cybersecurity-fights-back-against-ransomware-f566ef5ee648">Ransom-as-a-Service</a> is a perfect fit. It requires no technical knowledge and little to no upfront investment. Participants simply solicit victims to get infected, by opening a file, navigating to a malicious website, or installing a harmful application that installs ransomware. If the victim pays to get access to their encrypted files, the participant receives a percentage of the payment. Although unethical, it can be an economic windfall for people struggling to survive.</blockquote>
<blockquote>The risk we all face is that a percentage of the next billion internet users might willingly become an army of fraudsters for cybercriminals, unless we find a way to undermine the underlying motivations.</blockquote>
<blockquote><em>“Data will remain valuable; therefore, it will continue to be targeted by attackers”</em></blockquote>
<blockquote><strong>The </strong><a href="https://www.bbc.co.uk/news/technology-51799738"><strong>Cambridge Analytica scandal</strong></a><strong> highlighted significant threats to privacy. Do you think this was isolated, or could we potentially see another case like this in future?</strong></blockquote>
<blockquote>The Cambridge Analytica incident is not isolated. Data is the new oil. Every company collects it from customers in some way. Many businesses use it in ways that customers don’t appreciate, including selling it. Data aggregation and analysis is tremendously insightful and therefore big business. More data equates to more power. With new privacy laws and protections, many ethical companies are now downshifting their collection efforts to be more conservative. They are also showing flexibility in how they treat, protect, and share such data. <strong>Data will remain valuable</strong>; therefore, it will continue to be targeted by attackers and misused by unethical organizations to the detriment of society. The battle for privacy is only now beginning and there are many battles ahead.</blockquote>
<p> </p>
<p> </p>
<p>I really enjoyed tackling such insightful and timely questions! The more we communicate, share, and collaborate, the stronger we become to make digital technology secure and trustworthy!</p></div>