
Gabe The Engineer

@gdbassett.bsky.social

Current cyber insurance leader. Former lead data scientist @VZDBIR. Co-inventor of Attack Flow. Views are my own.

607 Followers  |  683 Following  |  232 Posts  |  Joined: 28.11.2023

Latest posts by gdbassett.bsky.social on Bluesky

If you’re at #sfn25 you definitely don’t want to miss this nanosymposium on cilia, tomorrow from 1-4pm! Come learn about neuronal cilia, they do some pretty cool stuff!

15.11.2025 20:26 — 👍 13    🔁 10    💬 1    📌 1
Rust in Android: move fast and fix things
Posted by Jeff Vander Stoep, Android: "Last year, we wrote about why a memory safety strategy that focuses on vulnerability prevention in ..."

15.11.2025 16:42 — 👍 0    🔁 1    💬 0    📌 0

This is drop dead sexy.

15.11.2025 13:16 — 👍 30    🔁 6    💬 3    📌 0

Now is the time to prepare to not set your house on fire this Thanksgiving.

15.11.2025 13:41 — 👍 722    🔁 130    💬 28    📌 17
Text in graphic: "Cards Against Humanity generously donated 100% of profits from their limited-edition informational product, 'Cards Against Humanity Explains the Joke,' to the American Library Association. ALA Explains the Donation at ala.org/CAH." Illustration of a Cards Against Humanity box on top of a short stack of books.

@cardsagainsthumanity.com, longtime supporter of libraries, organized a creative way to support ALA with presales of a special edition product—Cards Against Humanity Explains the Joke—during Banned Books Week. CAH will donate 100% of proceeds to ALA.

ALA Explains the Donation: ala.org/CAH

#IGM25

12.11.2025 19:20 — 👍 81    🔁 26    💬 2    📌 3

According to signaling theory, some signals must be costly just to be costly—that's how you get a separating equilibrium. Think peacocks and their oversized feathers. So even if AI removes one costly signal, it doesn't mean we should stop technological progress — we'll just find new ones.

14.11.2025 11:52 — 👍 30    🔁 9    💬 6    📌 1
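
For readers who want the mechanics behind that claim, here is a minimal sketch of the separating condition; the notation and the textbook setup are mine, not the original poster's. Let $c_H$ and $c_L$ be the cost of producing the signal for high- and low-quality senders, and $B$ the benefit of being treated as high quality. The signal separates the types only while

\[
c_H \le B < c_L ,
\]

i.e. it is worth sending for high types but too expensive to fake for low types. If a new technology collapses $c_L$ toward $c_H$, the inequality fails, the signal pools, and the market moves on to some other signal whose cost still differs across types.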

A corollary of the 'lemons' analysis is that good candidates drop out because they won't accept the offered pay rate. It's not clear whether that applies to this labour market, given the bargaining imbalances, but it highlights the potential for AI to produce widespread informational asymmetries, and thus 'lemons' problems.

14.11.2025 11:54 — 👍 12    🔁 3    💬 0    📌 0
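
A stylized version of Akerlof's unraveling argument, under my own simplifying assumptions (candidate productivity $p$ uniform on $[0,1]$, employers can only offer the average productivity of whoever stays, candidates leave if offered less than their own $p$):

\[
w = \mathbb{E}[\,p \mid p \le w\,] = \frac{w}{2} \;\Longrightarrow\; w = 0 ,
\]

so each round of exit by the best remaining candidates drags the pooled offer down further, and in the limit only the worst candidates are left in the pool.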

Good statement from the US Conference of Catholic Bishops

14.11.2025 02:17 — 👍 6620    🔁 1980    💬 186    📌 307
dplyr but make it bussin fr fr no cap `genzplyr` is an alternative syntax for `dplyr` that replaces boring old function names with GenZ slang. Your data wrangling is about to hit different.

"dplyr but make it bussin fr fr no cap"
hadley.github.io/genzplyr/

08.11.2025 10:24 — 👍 23    🔁 12    💬 2    📌 1

For each additional moral–emotional word in a social media post, the number of shares increases by 13%

Our new meta-analysis finds robust evidence of moral contagion (N=4,821,006)

The moral contagion effect is even stronger in larger, pre-registered studies (17%).
academic.oup.com/pnasnexus/ar...

05.11.2025 16:58 — 👍 87    🔁 38    💬 3    📌 4
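
To put the headline number in scale (my own arithmetic, assuming the per-word effect compounds multiplicatively, which is how such relative effects are usually reported):

\[
1.13^{3} \approx 1.44, \qquad 1.17^{3} \approx 1.60 ,
\]

so a post with three moral–emotional words would be expected to draw roughly 44% more shares than a comparable post with none, or about 60% more under the 17% estimate from the larger, pre-registered studies.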
Manny's stating that if you come in during the first week of November and show your SNAP card, they will make a family meal for here or to go.

SUPPORT MANNY'S IF YOU ARE IN CHICAGO.

31.10.2025 23:08 — 👍 1680    🔁 543    💬 16    📌 26

New effect size just dropped: The PVPP

01.11.2025 01:16 — 👍 8    🔁 3    💬 1    📌 0

So why are basic controls so good? Is it because they have been around the longest, so they're the most efficient & refined? Because they were the 1st controls & we mitigated the biggest vulnerabilities 1st? Is it that they affect the threat side of the risk equation by limiting threat actor targeting?

09.10.2025 12:06 — 👍 0    🔁 0    💬 0    📌 0

Why are basic cyber security controls so effective?

There's no inherent reason the complexity of a control should affect its effectiveness. In fact, I'd expect fancier controls to provide better security than basic ones.

09.10.2025 12:05 — 👍 0    🔁 0    💬 1    📌 0

quantitative methods, qualitative methods, mixed methods

07.10.2025 10:00 — 👍 429    🔁 86    💬 14    📌 88

The OpenAI preprint on arXiv: arxiv.org/pdf/2509.04664

21.09.2025 12:50 — 👍 134    🔁 27    💬 1    📌 1
Apply - Interfolio

become my colleague at penn bioengineering: apply.interfolio.com/173716

19.09.2025 02:29 — 👍 10    🔁 8    💬 1    📌 0

Is there any way to keep Android/chrome from sharing Google links for everything?

I'm trying to share, not tell Google everyone I share with.

15.09.2025 01:30 — 👍 0    🔁 0    💬 0    📌 0
Transmission networks of long-term and short-term knowledge in a foraging society Abstract. Cultural transmission across generations is key to cumulative cultural evolution. While several mechanisms—such as vertical, horizontal, and obli...

💙New paper!💙

How is knowledge transmitted across generations in a foraging society?

With @danielredhead.bsky.social
we found: In BaYaka foragers, long-term skills pass in smaller, sparser networks, while short-term food info circulates broadly & reciprocally

academic.oup.com/pnasnexus/ar...

14.09.2025 07:52 — 👍 162    🔁 66    💬 4    📌 5

And how!

10.09.2025 05:15 — 👍 41    🔁 8    💬 4    📌 0
LightScope - See Your Scanners

youtu.be/VzHcWZcqIA0

Dutch Waterfall scans coming out of the Netherlands, how you can tell over 1,400 IPs are working together, and novel temporal fingerprinting/visualization for scan traffic!

My PhD research.

lightscope.isi.edu

11.09.2025 22:35 — 👍 2    🔁 3    💬 0    📌 0

But, I need to trust my computer to store my credentials (which I didn't have to before).

Honestly, it's a password manager where I don't get to pick the password.

Well, a whole bunch of different password managers.

11.09.2025 23:55 — 👍 0    🔁 0    💬 1    📌 0

Ultimately it's "do you trust you or do you trust your device more?" And probably in the future, "do you trust your genAI model?" I suppose for most folks, we do trust the device more. I still trust myself more though I think.

11.09.2025 15:45 — 👍 0    🔁 0    💬 1    📌 0

I've been torn on passkeys but couldn't explain why. I think I've got it now though.

It feels like we're saying "We can't trust you to give your password. So we've given your password to your device, which we trust both to identify you and to identify who's getting your password."

11.09.2025 15:43 — 👍 0    🔁 0    💬 1    📌 0
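
The trust model being described boils down to public-key challenge-response: the device generates and guards a private key the user never sees, and the site only ever holds the matching public key. The sketch below is my own illustration of that idea in Python using the cryptography package; it is not the actual WebAuthn/CTAP protocol, and the variable names are hypothetical.

# Rough sketch of the passkey trust model (illustration only, not real WebAuthn):
# the device holds a private key the user never sees; the site stores only the
# matching public key and verifies signatures over fresh challenges.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# "Registration": the device generates the credential; only the public half leaves it.
device_private_key = Ed25519PrivateKey.generate()   # stays on the device
site_public_key = device_private_key.public_key()   # stored by the website

# "Login": the site sends a fresh random challenge and the device signs it.
challenge = secrets.token_bytes(32)
signature = device_private_key.sign(challenge)

# The site checks the signature against the stored public key; there is no shared
# secret, and nothing reusable is revealed if the site's database leaks.
try:
    site_public_key.verify(signature, challenge)
    print("login accepted: the device proved possession of the private key")
except InvalidSignature:
    print("login rejected")

Which is also why it reads like a password manager where you don't pick the password: the secret is machine-generated, per-site, and only usable through the device that holds it.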

Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles

Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n

06.09.2025 08:13 — 👍 3365    🔁 1721    💬 102    📌 311

I am fascinated by this guy who was a Higher Ed administrator and has now moved into teaching classes as a faculty member. He is documenting his whole journey on TikTok. Over the summer, he had so much excitement 🧵

05.09.2025 12:03 — 👍 971    🔁 257    💬 18    📌 91

needlework is great because you get to stab things

30.08.2025 22:27 — 👍 172    🔁 10    💬 9    📌 0

Another librarian is fighting back. Let’s help her.

28.08.2025 01:11 — 👍 48    🔁 25    💬 0    📌 0

Asparagus makes EVERYONE's urine smell weird, but NOT EVERYONE can smell that smell. For years, scientists thought that asparagus made only SOME PEOPLE's urine smell, because some folks reported that it didn't. This is an example of why it's so important to ASK THE RIGHT QUESTION.

19.08.2025 01:03 — 👍 373    🔁 51    💬 13    📌 10
Bookmobiles Bring Food, Internet, And Reading Materials Bookmobiles may seem old-school, but they are still around and better than ever, offering new and unique services to connect with users.

Check out historical ways some bookmobiles have traveled: watercraft, train, carts pulled by horses, donkeys, elephants, and even camels!

15.08.2025 02:01 — 👍 40    🔁 15    💬 1    📌 1
