Esben Rasmussen's Avatar

Esben Rasmussen

@esbenrasmussen.bsky.social

Senior #SEO manager specializing in automation, tech and AI. Digital optimizer with a love for metal music 🤘 and a nerdy interest in #data + #webanalytics. Working @ Transact Denmark. I love SEO news - when reported honestly (sorry SEJ for being a pain)

55 Followers  |  216 Following  |  23 Posts  |  Joined: 28.01.2025

Latest posts by esbenrasmussen.bsky.social on Bluesky

I know this is really geeky, but I would love to see examples from Google of what Gbot server load/priority could look like based on the request, response headers, speed and content fetched.

Not giving away specifics, but enough that we gain a better understanding of what carries weight when "budgeting" for Gbot.

22.09.2025 12:27 | 👍 0    🔁 0    💬 1    📌 0
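
To make the question concrete, here is a purely hypothetical toy model of what such a weighting could look like. Every factor and coefficient below is invented for illustration; nothing here is Google's actual logic.

    # Purely illustrative toy model - NOT Google's real crawl-cost weighting.
    def toy_fetch_cost(response_ms: float, bytes_fetched: int, status: int) -> float:
        cost = response_ms / 1000          # slower responses eat more crawl capacity
        cost += bytes_fetched / 1_000_000  # bigger payloads cost more to fetch
        if status in (204, 304):           # header-only answers: assumed cheaper
            cost *= 0.5
        return cost

    # e.g. a 350 ms response carrying 120 KB of HTML
    print(toy_fetch_cost(response_ms=350, bytes_fetched=120_000, status=200))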

Love the answer and completely agree!
Thanks.

My curiosity just drove me to look into whether Google has made any public statements about what constitutes a crawl/hit in terms of their own processes.

If it is the total load on Gbot infrastructure, then why not just have an article exemplifying this?

22.09.2025 12:27 | 👍 0    🔁 0    💬 1    📌 0

Thanks. Yeah, I am trying to get to the bottom of it. It seems they are part of some scripts, which generate the URLs with unique IDs on each request.

Next up: reaching out to the developers 😅

Am I correct: The help docs have no definition of what constitutes a crawl in terms of budget?

22.09.2025 09:12 | 👍 1    🔁 0    💬 1    📌 0

Well, my plan is to tell the client to block those URLs with robots.txt.

I have no idea why it is crawlable 🙈

But it just made me think about the crawl budget - and made me wonder if returning no content is still classified as a crawl or not.

22.09.2025 08:28 | 👍 1    🔁 0    💬 1    📌 0
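
For context, the robots.txt fix would look something like the sketch below; /example-generated-path/ is a made-up placeholder, since the client's real URL pattern isn't shared in the thread.

    User-agent: *
    Disallow: /example-generated-path/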

Hey @johnmu.com

I just discovered roughly 200K URLs with status 204 (no content) not blocked in robots.txt.

Would a status 204 waste crawl budget or not?

I guess the question is: What IS a crawl (in terms of crawl budget)?

Is it the request, response headers and content, OR just the request + response headers?

22.09.2025 08:18 | 👍 0    🔁 0    💬 1    📌 0
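
A minimal sketch of how one might quantify this from the server side - counting Googlebot requests answered with 204 in an access log. The filename and the combined log format are assumptions about the setup, not details from the thread.

    import re

    # Matches the request and status fields of a combined-format access log line.
    line_re = re.compile(r'"[A-Z]+ \S+ [^"]*" (?P<status>\d{3}) ')

    count = 0
    with open("access.log") as fh:  # assumed log location
        for line in fh:
            if "Googlebot" not in line:  # crude filter; verify IPs for rigor
                continue
            m = line_re.search(line)
            if m and m.group("status") == "204":
                count += 1

    print(f"Googlebot requests answered with 204: {count}")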

Also my hunch. A time-travelling Gbot would be quite impressive, though!

Is the crawl time reported using the PDT timezone (so when I see 21:28 in GSC and I am located in the UK, I need to add 8 hours) or using the user's timezone?

If PDT, where do I suggest that this is made much clearer in GSC? 😊

23.05.2025 15:27 | 👍 1    🔁 0    💬 1    📌 0
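
If the PDT assumption holds, the conversion is a straight timezone shift - a minimal sketch, using the 21:28 crawl time from the post below:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    # Assuming GSC reports "last crawled" in America/Los_Angeles (PDT in May).
    crawl_pdt = datetime(2025, 5, 21, 21, 28, tzinfo=ZoneInfo("America/Los_Angeles"))
    crawl_uk = crawl_pdt.astimezone(ZoneInfo("Europe/London"))
    print(crawl_uk)  # 2025-05-22 05:28:00+01:00 - already the next day in the UK

That shift would also dissolve the "precognition" puzzle below: 21:28 PDT on the 21st is 05:28 on the 22nd in the UK, i.e. after a midnight release.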

Is Googlebot capable of precognition?

Gbot reports a bug that was introduced after the Gbot crawl.

Does anyone (perhaps @johnmu.com) know why GSC lists a page as last crawled May 21, 21:28, with the user-declared canonical showing a bug that was introduced in a midnight release between the 21st and the 22nd?

23.05.2025 10:35 | 👍 0    🔁 0    💬 1    📌 0

Awesome. Thanks for that. I tried looking up the answer on your long page with general feature info, but this info wasn't listed. Might be worth adding, in terms of making sure your computer clock is set to the correct time zone 🤦‍♂️😅

10.03.2025 12:06 | 👍 1    🔁 0    💬 0    📌 0

@screamingfrog.co.uk Quick question regarding scheduled crawls: Does SF use the computer clock for activating the scheduled crawls at the right time, or does SF use some other global clock with an internally synced clock?

07.03.2025 08:30 | 👍 0    🔁 0    💬 1    📌 0

How did you identify the need for the content brief in the first place?

Often I would use a combo of 1 and 2 to identify main topics that do not overlap.

Then I would use 2 and create a page strategy matching intent with topic. From that I would build the brief.

06.03.2025 05:53 | 👍 0    🔁 0    💬 0    📌 0

Haha, they could just as well have said:

As famous captain Picard of Star Wars once said: "So long and thanks for all the fish"

But yes, love seeing how IT issues are combined with climate challenges... I guess cloud computing and AI reinforce that.

05.03.2025 16:40 | 👍 1    🔁 0    💬 0    📌 0
Chart - Showing sharp rise in "Organic traffic".

X/Horizontal Axis: Time (in 4-day blocks, early February to early March).
Y/Vertical Axis: Traffic quantity (in 400K increments).

Chart shows a flat line until near the end of February, then a massive increase over a matter of days (from 0 to over 1.5 million in approx. a week).


.
:: Is Google showing Favouritism? ::

After years (and years) of complaints,
of people showing G examples of weak, bad, spammy, unhelpful, unsatisfactory content,
ranking on "brand sites" (particularly Large/Enterprise Publishers) ...

... G made the #SRA.

>>>

X: x.com/darth_na/sta...

04.03.2025 16:24 | 👍 9    🔁 3    💬 3    📌 2

Thanks! Will definitely also be my recommendation.

I also just learned that the server sometimes serves a different variant of the robots.txt file.
Sometimes it includes the line "Disallow: /oplevelser/*$" and sometimes it doesn't.

🤯

03.03.2025 11:54 | 👍 2    🔁 0    💬 0    📌 0
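
A minimal sketch of how to catch a server rotating robots.txt variants - fetch it a few times and hash each response. The URL is the site from the thread; the repeat count and interval are arbitrary choices.

    import hashlib
    import time
    import urllib.request

    seen = {}
    for _ in range(5):
        body = urllib.request.urlopen("https://www.dailys.dk/robots.txt").read()
        digest = hashlib.sha256(body).hexdigest()
        seen[digest] = seen.get(digest, 0) + 1
        time.sleep(2)  # arbitrary pause between fetches

    for digest, hits in seen.items():
        print(f"{hits}x variant {digest[:12]}")  # more than one digest = inconsistent file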

Hi @johnmu.com
Do you know how Gbot would interpret this in robots.txt:

/oplevelser/*$

GSC says it's crawlable when inspecting: www.dailys.dk/oplevelser/m...

technicalseo.com/tools/robots... + Screaming Frog say it's not, due to robots.txt.

Is *$ an invalid combo making Gbot ignore that line?

03.03.2025 11:01 | 👍 2    🔁 0    💬 1    📌 0
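
Per Google's published robots.txt matching rules, "*" matches any run of characters and a trailing "$" anchors the end of the URL, so "/oplevelser/*$" should reduce to "any path starting with /oplevelser/" rather than being invalid. A minimal sketch of that matching logic (my own re-implementation, not Google's code):

    import re

    def rule_to_regex(rule: str) -> re.Pattern:
        # Translate a robots.txt path rule into a regex: '*' -> '.*',
        # trailing '$' -> end-of-string anchor, everything else literal.
        anchored = rule.endswith("$")
        body = rule[:-1] if anchored else rule
        pattern = "".join(".*" if ch == "*" else re.escape(ch) for ch in body)
        return re.compile("^" + pattern + ("$" if anchored else ""))

    disallow = rule_to_regex("/oplevelser/*$")
    print(bool(disallow.match("/oplevelser/anything")))  # True -> blocked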

😂

13.02.2025 08:23 | 👍 1    🔁 0    💬 0    📌 0
Scammers behind influx of bad URLs in Google Search Console Recently in my work at TRANSACT Denmark I have seen several big websites being "hit" with spam data in Google Search Console, so I thought I would share the issue here - and how a properly setup robot...

I am seeing more big websites being hit by "bad URLs" in Google Search Console, which indicates that the problem could be on the rise - perhaps leveraged by AI mass-analyzing weak spots on big, reputable websites. 😱💩

I wrote an article showing you how to fix it: www.linkedin.com/pulse/scamme...

13.02.2025 08:08 | 👍 1    🔁 0    💬 0    📌 0

Awesome! I guess I could have told myself that. 🤦‍♂️😅
Will check it out.

13.02.2025 06:56 | 👍 2    🔁 0    💬 1    📌 0

Sounds really interesting!

Is there any way to watch your TikTok if I do not have TikTok installed (it's a tinfoil-hat thing)?

12.02.2025 21:55 | 👍 1    🔁 0    💬 1    📌 0

Interesting!

1) What do you monitor? Is it a certain prompt?

2) GPTs return different answers for each request, so how do you evaluate output that can be plain text, bullets or tables?

11.02.2025 18:02 | 👍 1    🔁 0    💬 0    📌 0

Thanks. Maybe I should look into Make at some point to see if it makes sense to use for some projects.

10.02.2025 14:53 | 👍 1    🔁 0    💬 0    📌 0

What does the use of Make add to the process, compared to just connecting ChatGPT directly to the Google Sheet using an extension?

08.02.2025 11:11 | 👍 1    🔁 0    💬 1    📌 0

Just joined a week ago and already loving it!
So much more focused than Elon's nightmare. It reminds me of old Twitter.

01.02.2025 21:23 | 👍 2    🔁 0    💬 0    📌 0
Post image

Zoomed out a bit.

01.02.2025 12:53 | 👍 0    🔁 0    💬 0    📌 0
Post image

@mortendd.bsky.social The wife and I have just been to Børnnerup Havn hoping to see the hoopoe (unfortunately it was gone).

BUT we saw animal tracks in the sand by the harbour and smelled strong animal urine 50 metres from there.
The prints were roughly the size of a 2- or 5-krone coin.
Can you tell whether the tracks could match a raccoon dog (Mårhund)?

01.02.2025 12:51 | 👍 0    🔁 0    💬 1    📌 0

@esbenrasmussen is following 19 prominent accounts