๐ Q: What does โprecisionโ refer to in AI?
A: In AI, "precision" refers to a metric that measures how many of a model's positive predictions are actually correct
V/ VISO
#ai #aisecurity
@cleartech.bsky.social
Engineer who helps clients scope, source and vet solutions in #Cloud #cloudsecurity #aisecurity| #ai|Tech analyst| VP Cloud and Security|USAF vet| ๐Learning from CIOs and CISOs on the daily| ๐ of NY Times Spelling ๐ Linked In: LinkedIn.com|in|JoPeterson1
๐ Q: What does โprecisionโ refer to in AI?
A: In AI, "precision" refers to a metric that measures how many of a model's positive predictions are actually correct
V/ VISO
#ai #aisecurity
๐ Q: What is prompt injection in Agentic AI?
A: In agentic AI, "prompt injection" refers to a security vulnerability where a malicious user manipulates the input prompt given to an AI system, essentially "injecting" harmful instructions to trick AI
v/ Cisco.bsky.social
#aisecurity #agenticai
๐ Q: How can data cleaning boost AI model accuracy?
A: Data cleaning is crucial in AI because the quality of data directly impacts the accuracy and reliability of AI models
#datacleaning #ai
๐ Q: Can Large Language Models (LLMs) alter data?
A: Yes, LLMs (Large Language Models) can indirectly alter data by generating new information or modifying existing data based on the prompts and context provided.
v/ @nexla.bsky.social
#aisecurity #ai
๐ Q: How do you restrict access to Large Language Models (LLMs)?
A: To restrict access to LLMs, implement access controls like (RBAC), multi-factor authentication (MFA), and user authentication systems, limiting who can interact with LLM
v/ Exabeam.bsky.social
#aisecurity #cloudai #cyberai
๐ Q: Can #AI models get stuck?
A: Yes, AI models can essentially get "stuck" in a state where they repeatedly generate similar outputs or fail to learn effectively
v/ Infobip
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: How do you secure private AI?
A: To secure private AI, you need:
โ
strict access controls
โ
data encryption
โ
model watermarking
โ
secure network infrastructure
โ
data anonymization,
โ
robust privacy policies,
โ
regular security audits
v/ Surgere
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: What is a false positive in AI?
A: A "false positive" in AI refers to when an AI system incorrectly identifies something as belonging to a specific category, like flagging human-written text as AI-generated
v/ Originality.ai
#cloud #cloudsecurity #cybersecurity #aisecurity
Sign-up for the AskWoody Newsletter and read Deanna's latest article: "Back to BASICs โ Hello, World! " A look at the favorite BASIC programming language versions.
www.askwoody.com/2025/back-to...
#programming #Learntocode #Computing @AskWoody
๐ Q: What is an AI ๐ค privacy issue?
A: An AI privacy issue refers to the potential for artificial intelligence systems to violate personal privacy by collecting, storing, and analyzing personal data without user knowledge, consent, or control
v/ IBM
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: What is over privilege in an AI ๐ค system?
A: โOver privilege" in an AI system refers to a situation where an AI model or component has been granted excessive access to data or functionalities
v/ @oneidentity.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: How does agentic AI handle inputs?
A: Agentic AI handles inputs by autonomously processing information from various sources, including environmental data, user interactions, and internal knowledge bases
v/ @ibm.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: What are common data leak vulnerabilities in LLMs?
A:
โ
Incomplete or improper filtering of sensitive information
โ
Overfitting or memorization of sensitive data
โ
Unintended disclosure of confidential information
v/ OWASPยฎ Foundation
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: Whatโs the difference between public and private AI?
A: Public AI operates on hyperscale cloud-based platforms and is accessible to multiple businesses
Private AI is tailored and confined to a specific organisation.
v/ ComputerWeekly.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
๐ Thank you Engati for naming me to 40 LinkedIn Top Voices in Tech for 2025
Itโs an honor and Iโm in amazing company
Read: ๐ www.engati.com/blog/linkedi...
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: What is a walled garden approach in AI?
A: A "walled garden" approach in AI refers to a closed ecosystem where a single entity controls all aspects of an AI system
v/ Iterate.ai
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: What is an unknown threat in AI security?
A: An "unknown threat" in AI security refers to a cyber threat that hasn't been previously identified or documented, meaning it lacks a known signature
v/ @zscaler.bsky.social
#cloud #cloudsecurity #aisecurity
๐ BrightTALK 's Cloud Cover has hit the 9K follower mark!
Interested in knowing where the Platform as a Service (PaaS) space is headed?
When: 2/12
Time: 12PM EST
๐ Register here: www.brighttalk.com/webcast/1998...
๐๏ธ Subscribe: www.brighttalk.com/channel/19985
#cloud #cloudsecurity #aisecurity
๐ Q: What is AI model collapse?
A: AI model collapse is a process where generative AI models trained on AI-generated data begin to perform poorly.
v/ @appinventiv.bsky.social
#cloud #cloudsecurity #cybersecurity #aisecurity
๐ Q: What is Adaptive authentication in AI security?
A: Adaptive authentication in AI security is a dynamic authentication method that uses machine learning and contextual data to assess the risk of a login attempt
v/ OneLogin by One Identity
#cloud #cloudsecurity #cloudai #aisecurity
๐ Q: What is adversarial machine learning?
A: Adversarial machine learning (AML) is a technique that uses malicious inputs to trick or mislead a machine learning (ML) model.
v/ @crowdstrike.bsky.social
#cloud #cloudsecurity #cybersecurity #cloudai
Thank you ๐
27.01.2025 15:55 โ ๐ 0 ๐ 0 ๐ฌ 0 ๐ 0๐กHappy to announce that Iโve been invited to participate in the AI Safety Executive Leadership Council for the Cloud Security Alliance
#cloud #cloudsecurity #cloudai #cybersecurity #aisecurity
๐ Q: How often should you refresh your cybersecurity policy?
A: A cybersecurity policy should be refreshed at least once a year
v/ @carbide.bsky.social
#cloud #cloudsecurity #cybersecurity #cloudai #aisecurity
๐ Q: What is an insider threat in AI security?
A: An "insider threat" in AI security refers to a situation where someone with authorized access to an organization's AI systems misuses that access to harm the organization
v/@vectraai.bsky.social
#cloud #cloudsecurity #cloudai #aisecurity
๐ Get up to speed with CMMC--Join us for a Webinar--January 28th, 1PM EST
As a IT or Security leader, will your business be impacted by the new CMMC rules?
Join Trustwave and Clarify360 for a webinar
When: 01/28/2025
Time: 1PM EST
๐ Register here: www.eventbrite.com/e/accelerate...
๐ Q: What is confabulation on the part of a Large Language Model (LLM)?
A: Confabulation on the part of a Large Language Model (LLM) is the generation of output that is not based on real-world input or information
v/@owasp.bsky.social
#cloud #cloudsecurity #cloudai #cyberai #ai
๐ Q: What are backdoor attacks?
A: Backdoor attacks are a type of cybersecurity threat that involves creating a hidden entry point into a system or network that can be exploited by an attacker to gain unauthorized access.
v/ @nightfallai.bsky.social
#cloud #cloudsecurity #cloudai #ai
๐ Guess what y'all? Dan Sรถdergren has included a quote from me in his upcoming book--How to โSurvive and Thriveโ In 2025. A Leaderโs Guide to the Times of AI
Sign up for waitlist = Free Book!
โ
Click here--https://www.aileadershipcourse.com/
#ai #aisecurity #aiautomation #aiworkflows
๐ Q: What is model fuzzing in AI?
A: Model Fuzzing is a testing technique used to identify vulnerabilities and weaknesses in machine learning models by inputting random, unexpected, or malformed data to observe how the model responds.
v/ @appsoc
#cloud #cloudsecurity #cloudai #aisecurity