
When the Report Was Wrong, and What It Taught Us

Signal & Flow's own analysis tool returned a confident report declaring a perfectly functional website was broken. Here is what caused it, how we found it, and what changed.

Key findings
  • A silent failure produced a plausible-looking report from almost no information, with no visible error
  • The root cause was a single structured data field that could contain a list rather than a single value
  • Six separate issues were contributing to the same failure before the true cause was identified
  • AI tools that produce coherent output from bad input are harder to debug than crashes
  • The session also surfaced improvements to redirect handling, image string stripping, and listing block collapsing

There is a particular kind of uncomfortable that comes from your own tool telling you something is broken when you know, with complete certainty, that it is not.

That is exactly what happened during a recent test of Signal & Flow. A perfectly functional, well-built website came back with a report declaring the homepage had failed to load, the contact page did not exist, and the about page had returned a 404 error. The report concluded, with some confidence, that the site was effectively non-functional and that no visitor could access anything at all.

The site was fine. It had been fine all along.

  • 6 separate issues contributing to the same failure before the root cause was found
  • 1 line in a structured data field that caused all of them
  • 0 visible errors. The process completed normally and returned a result. It was just wrong.

The obvious answer

When an AI analysis tool fails to read a website correctly, the first thing anyone assumes is a JavaScript problem. A lot of modern websites build their pages dynamically. The content is not sitting in the HTML waiting to be read; it is assembled by scripts that run after the page loads. A basic crawler arrives, finds an empty shell, and reports back that nothing is there.

This is a known issue. It has a known fix. Signal & Flow already uses a reader that renders JavaScript before extracting content, precisely to avoid this.

So the first assumption was reasonable. It was also wrong.

The thing about debugging

Here is what nobody tells you about fixing software: the answer is almost never where you think it is.

You find the obvious problem and fix it. You test again. Still broken. You find another problem and fix that. Still broken. You add more logging, more visibility, more diagnostic output, and the logs show you something that makes no sense, because everything looks correct, and yet the result is still wrong.

This is the experience of building anything serious. Not the clean narrative of problem identified, problem solved. The messier reality of problem identified, fix applied, different problem surfaces, fix applied, third problem surfaces, realise the first two fixes were solving the wrong thing entirely.

Over the course of one session, we worked through six separate issues that were all quietly contributing to the same failure. Each one looked like the root cause. None of them were.

What was actually happening

The real culprit was a single line in a website's structured data.

Structured data is information that websites embed for search engines and other tools. It is a machine-readable description of what the business is, where it is located, what it sells. It follows a standard format. One of the fields in that format is the business type.

Most websites write their business type as a single value. Some websites, perfectly correctly and perfectly validly, write it as a list of values. A car dealership might describe itself as both a local business and a car dealer, because it is both.

Signal & Flow's analysis pipeline expected a single value. When it received a list, it did not crash. It did not throw a visible error. It quietly failed, returned an empty result, and the report was generated from almost nothing.

"The report was not making things up. It was doing its best with what it had been given. What it had been given was essentially nothing."
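In schema.org JSON-LD, the business type lives in the `@type` field, which is allowed to be either a single string or an array of strings. A minimal sketch of the kind of normalisation that handles both shapes (the function name and surrounding code are illustrative, not Signal & Flow's actual implementation):

```python
import json

def normalise_types(value):
    """Return the @type field as a list, whether it arrived
    as a single string or as an array of strings."""
    if value is None:
        return []
    if isinstance(value, str):
        return [value]
    return list(value)

# A car dealership may validly declare more than one type.
single = json.loads('{"@type": "AutoDealer"}')
multi = json.loads('{"@type": ["LocalBusiness", "AutoDealer"]}')

print(normalise_types(single["@type"]))  # ['AutoDealer']
print(normalise_types(multi["@type"]))   # ['LocalBusiness', 'AutoDealer']
```

Downstream code then only ever sees a list, so the single-value and multi-value cases take the same path instead of one of them silently producing nothing.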

Why silent failures are the hardest kind

A crash is easy. A crash tells you something went wrong, where it went wrong, and often why. You fix it and move on.

A silent failure is different. Everything appears to be working. The process completes. A result is returned. The result is just wrong, and unless you are looking very carefully at the right place, you might not know.

This is a particular challenge when building AI-powered tools. The AI component is capable and articulate. Give it almost no information and it will still produce something coherent and well-written. That is a strength when the information is good. It becomes a problem when the information is absent, because the output looks plausible even when it is not.

The fix was small in the end. A few lines of code to handle the case where a business type is a list rather than a single value. But finding it required logging the pipeline at every stage, reading what was actually being passed between each step, and being willing to keep looking even when the previous fix seemed like it should have worked.
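The "logging at every stage" approach can be sketched generically: wrap each pipeline step so that the type and size of whatever it hands to the next step is recorded, which makes an empty hand-off visible immediately instead of surfacing three steps later as a confident but empty report. The stage names here are toy examples, not the actual pipeline:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

def run_pipeline(url, stages):
    """Run each stage in turn, logging what is passed between them."""
    data = url
    for stage in stages:
        data = stage(data)
        size = len(data) if hasattr(data, "__len__") else "n/a"
        log.info("after %-8s -> %s (size=%s)", stage.__name__,
                 type(data).__name__, size)
    return data

# Toy stages: 'extract' quietly returns nothing, which is
# exactly the kind of hand-off this logging makes visible.
def fetch(url):
    return "<html>...</html>"

def extract(html):
    return {}  # silent failure: empty result, no exception

def report(facts):
    return "A confident report built from: " + str(facts)

result = run_pipeline("https://example.com", [fetch, extract, report])
```

The `size=0` line after `extract` is the tell: everything completed normally, but nothing of substance was passed forward.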

What changed

Beyond fixing the specific issue, the session produced something more useful: a clearer picture of where the pipeline needed to be more robust.

Signal & Flow now handles multi-value structured data types correctly. It strips a category of long, meaningless image reference strings that were consuming analysis capacity without contributing anything useful. It collapses repeated listing blocks, the kind you find on product pages with dozens of identical card structures, so the analysis focuses on the site rather than the inventory. It follows redirects to ensure the correct version of a URL is always crawled, regardless of how a user types it.
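Of those improvements, collapsing repeated listing blocks is the easiest to illustrate: fingerprint each block by its tag structure alone, ignoring the text inside, and keep one representative per fingerprint, so fifty identical product cards contribute one block to the analysis instead of fifty. A toy version (the fingerprinting scheme here is an assumption for illustration, not Signal & Flow's actual code):

```python
import hashlib
import re

def fingerprint(block_html):
    """Fingerprint a block by its tag structure alone, so
    identically-structured cards with different text collide."""
    tags = re.findall(r"</?([a-zA-Z0-9]+)", block_html)
    return hashlib.sha256("|".join(tags).encode()).hexdigest()

def collapse_listings(blocks):
    """Keep one representative block per structural fingerprint."""
    seen = set()
    kept = []
    for block in blocks:
        fp = fingerprint(block)
        if fp not in seen:
            seen.add(fp)
            kept.append(block)
    return kept

cards = [
    '<div class="card"><h3>Ford Focus</h3><p>£8,995</p></div>',
    '<div class="card"><h3>VW Golf</h3><p>£10,250</p></div>',
    '<div class="card"><h3>BMW 118i</h3><p>£12,400</p></div>',
]
about = '<section><h2>About us</h2><p>Family-run since 1987.</p></section>'

print(len(collapse_listings(cards + [about])))  # 2: one card, plus the about block
```

The structurally unique content, the part worth analysing, survives intact; the inventory collapses to a single example.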

None of these were the headline bug. All of them matter.

A debugging session that starts with one failing report and ends with five pipeline improvements is not an unusual outcome. The original failure is the thing that gets you looking. What you find while looking is usually more valuable than the fix itself.

The part that made it worthwhile

After all of that, the final report came back on the site that had originally failed.

It was good. Genuinely useful. It identified real issues: a grammatical error that appeared twice in quick succession in the meta description, a compelling brand story buried on an about page that most visitors would never reach, a regulatory credential mentioned only at the bottom of the contact page rather than alongside the finance options it related to. Specific, accurate, actionable.

That is what this is supposed to do. Not produce a report. Produce insight.

The wrong report was frustrating. Finding out why it was wrong was instructive. Building something that gets it right is the point.

Want to see what Signal & Flow finds on your site? The pipeline that went through all of this is the one running your analysis. It is more robust for it.
