Product analytics used to feel like a quiet back-office concern.
Add an event library, send a few clicks and page views into a dashboard, wire up funnels, then let product and growth teams argue about activation rates. Useful, mostly invisible, and rarely part of the product conversation itself.
That version of analytics is becoming harder to defend.
UK SaaS teams now have three pressures landing at once. The Data (Use and Access) Act 2025 has changed parts of the PECR cookie and storage-access regime. The ICO finalised updated guidance on storage and access technologies in April 2026. At the same time, browsers and platforms keep pushing the web away from casual third-party tracking and towards more privacy-preserving patterns.
The practical lesson is not that analytics is dead. It is that product analytics needs to become more intentional.
For product-led software companies, this is a good thing. A cleaner analytics setup gives teams better data, fewer compliance headaches and a stronger trust story for customers.
Analytics is product infrastructure
Most SaaS products depend on analytics to make good decisions.
Teams need to know where users get stuck, which features are being adopted, whether onboarding works, and whether a release improved or damaged the customer experience. Without that feedback loop, product work becomes expensive guesswork.
The problem is that analytics often grows organically. One tool is added for marketing attribution. Another appears for session replay. A third powers in-app experimentation. Support adds customer health signals. A product manager adds custom events for a launch. Engineering adds logs that look suspiciously like behavioural tracking. Nobody sets out to build a messy tracking estate; it just accretes.
By the time a customer asks what is being collected, where it goes, or how they can opt out, the answer may involve a spreadsheet, two vendors and a nervous pause.
That is a product quality issue, not just a legal issue.
What changed in the UK
The basic UK position remains familiar: if you store information on a user’s device or access information already stored there, PECR can apply. The ICO’s cookie guidance says organisations must tell people what cookies or similar technologies do and why, and generally need consent unless an exception applies. It also makes clear that the rules are broader than classic browser cookies and can cover similar technologies used in apps and other devices.
The newer wrinkle is that the Data (Use and Access) Act 2025 created more flexibility for some low-risk uses. Legal commentary on the Act notes that, from 5 February 2026, consent is no longer required for certain categories of cookies and similar technologies, including some first-party statistical or analytics purposes where the data is used only by the service operator. For those cases, organisations still need to provide clear information and a free, simple way to object.
The ICO’s updated storage and access technologies guidance, finalised on 29 April 2026, is the key document to watch here. The update explicitly addresses the changes following the Data (Use and Access) Act and adds guidance on what a simple means of objecting should look like, and whether the same technology can be used for multiple purposes.
That distinction matters. A narrow, first-party analytics setup used to understand and improve a product is not the same thing as behavioural advertising, cross-site profiling or opaque third-party tracking. But teams need to design the difference into the product rather than assume it will be obvious later.
The trap: treating the exception as permission to sprawl
The worst response would be: “Great, analytics cookies are easier now. Let’s track more.”
That misses the point.
Even where a consent exception may apply, SaaS teams still need to be clear about purpose, data flows and user choice. If analytics data is personal data, UK GDPR duties can still apply. If data is shared with third parties for advertising or wider profiling, the position changes. If the same script does several jobs, some low-risk and some not, the whole setup becomes harder to explain and harder to defend.
Customers do not usually object to a product team learning that onboarding is broken. They do object to being surprised. They object to unclear vendors, hidden replay tools, vague cookie banners and product settings that say one thing while network requests suggest another.
The opportunity in 2026 is to turn analytics from a murky background process into an explicit part of the product’s trust layer.
Start with an analytics inventory
A sensible first step is a tracking inventory.
List every cookie, SDK, pixel, event pipeline, replay tool and analytics destination used across the website, app and customer-facing product. For each one, record:
- what it collects
- where it runs
- whether it stores or accesses information on the user’s device
- whether it collects personal data or can be linked to an account
- the purpose it serves
- who receives the data
- whether it is first-party or third-party
- how long the data is kept
- what control the user has
- which lawful basis, consent route or PECR exception the team believes applies
This does not need to become a theatre-grade compliance artefact. It needs to be accurate enough that product, engineering and leadership can make decisions from it.
The inventory usually reveals two uncomfortable things. First, several events are no longer used by anyone. Second, some tracking exists because it was easy to add, not because it answers an important product question.
Both are good news. Deleting weak tracking is one of the fastest ways to improve data quality and reduce risk.
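One way to keep the inventory honest is to hold it as structured data next to the code rather than in a spreadsheet. The sketch below is illustrative: the entry shape, field names, tools and retention periods are all invented, not a real stack.

```typescript
// A sketch of a tracking inventory kept as structured data next to the
// code. Every field and entry here is illustrative, not a real stack.

type InventoryEntry = {
  name: string;
  collects: string;
  storesOnDevice: boolean; // raises the PECR storage/access question
  personalData: boolean;   // raises UK GDPR duties
  purpose: string;
  recipients: string[];
  firstParty: boolean;
  retentionDays: number;
  userControl: "consent" | "objection" | "none";
  claimedBasis: string;
};

const inventory: InventoryEntry[] = [
  {
    name: "product-events",
    collects: "feature usage events keyed to account id",
    storesOnDevice: true,
    personalData: true,
    purpose: "understand onboarding and feature adoption",
    recipients: ["internal warehouse"],
    firstParty: true,
    retentionDays: 365,
    userControl: "objection",
    claimedBasis: "first-party analytics (to be confirmed with advice)",
  },
  {
    name: "session-replay",
    collects: "DOM snapshots and interactions",
    storesOnDevice: true,
    personalData: true,
    purpose: "debugging support tickets",
    recipients: ["third-party replay vendor"],
    firstParty: false,
    retentionDays: 30,
    userControl: "none",
    claimedBasis: "unclear",
  },
];

// Flag entries worth a closer look: personal data going to third
// parties, or tracking the user cannot influence at all.
function needsReview(e: InventoryEntry): boolean {
  return (!e.firstParty && e.personalData) || e.userControl === "none";
}
```

Because the inventory is data, the "needs review" question becomes a query rather than a meeting, and the list can be diffed in version control when tools are added or removed.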
Design analytics around questions, not exhaust fumes
Good analytics starts with product questions.
Are new users reaching their first useful outcome? Which parts of setup create support tickets? Are customers discovering the features that reduce manual work? Does a new workflow help people complete a real task faster? Are teams adopting the product beyond the initial champion?
Those questions lead to focused events. Focused events are easier to explain. They are also easier to govern.
Bad analytics starts with exhaust fumes: collect everything now, decide what it means later. That approach produces noisy dashboards, brittle funnels and awkward privacy conversations.
For most B2B SaaS products, a smaller set of well-named events is more valuable than a sprawling event swamp. Event names should describe meaningful product actions, not implementation trivia. Properties should be deliberately chosen.
In other words: analytics should be designed, not harvested.
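A question-driven catalogue can be made concrete with types. The event names and properties below are invented for illustration; the point is that the whole event surface is small, named after product actions, and reviewable in one place.

```typescript
// Sketch of a deliberately small, typed event catalogue.
// Event names and properties are invented for illustration.

type ProductEvent =
  | { name: "onboarding_completed"; props: { durationSeconds: number } }
  | { name: "report_exported"; props: { format: "csv" | "pdf" } }
  | { name: "teammate_invited"; props: { role: "admin" | "member" } };

// A single entry point keeps naming consistent. Adding an event means
// extending the union above, which shows up clearly in code review.
function track(event: ProductEvent): string {
  // In a real app this would forward to the analytics pipeline;
  // here it just returns the log line that would be sent.
  return `${event.name} ${JSON.stringify(event.props)}`;
}
```

A discriminated union like this means an undocumented event or an off-catalogue property is a compile error, not a surprise in the warehouse three months later.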
Give users control without ruining the product
User control does not have to mean wrecking your measurement.
If a user objects to optional analytics, the product should respect that choice. The team can still measure essential service health, security, billing and operational needs where a proper basis exists. But optional product analytics should be separated cleanly enough that turning it off is technically real, not just a banner state.
That requires architecture.
Consent and objection choices should flow into the event layer before non-essential tools fire. Server-side events should not quietly recreate tracking a user has rejected in the browser. Admin settings for workspaces should be clear about whether they affect only marketing cookies, in-product analytics, session replay, or all optional measurement.
This is where product and engineering need to work together. A privacy setting that nobody understands is not a control. It is a liability with a toggle.
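A minimal sketch of that architecture, assuming a simple in-process event layer. The category names and destination shape are assumptions, not any particular vendor's API.

```typescript
// Minimal sketch of a consent-aware event layer. Category names and the
// destination shape are assumptions, not any particular vendor's API.

type Category = "essential" | "analytics" | "replay";

type Destination = {
  name: string;
  category: Category;
  send: (event: string) => void;
};

class EventRouter {
  // Essentials only by default; optional categories must opt in.
  private allowed = new Set<Category>(["essential"]);

  constructor(private destinations: Destination[]) {}

  setChoice(category: Category, granted: boolean): void {
    if (category === "essential") return; // essential measurement stays on
    if (granted) {
      this.allowed.add(category);
    } else {
      this.allowed.delete(category);
    }
  }

  // Returns the names of destinations that actually received the event,
  // so "off" is technically real and testable, not just a banner state.
  emit(event: string): string[] {
    const delivered = this.destinations.filter((d) =>
      this.allowed.has(d.category)
    );
    delivered.forEach((d) => d.send(event));
    return delivered.map((d) => d.name);
  }
}
```

The useful property is that the choice lives in one gate in front of every destination, so a rejected category cannot fire from a forgotten script tag, and the behaviour can be asserted in tests.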
Watch international transfers and suppliers
Analytics stacks are rarely local.
Many SaaS teams send event data to US-hosted tools, cloud warehouses, customer success platforms and experimentation systems. The ICO’s international transfers guidance reminds organisations to consider when restricted transfers apply, what safeguards are needed, and who is responsible for compliance.
That does not mean UK teams must avoid modern tools. It does mean they should know where analytics data goes and whether customer contracts, data processing terms and transfer mechanisms match reality.
A procurement answer of “we think Mixpanel/Amplitude/GA/Segment handles that” is not enough. The product team should know which destinations receive which events, whether identifiers are personal data, and whether any sensitive business information can leak through event properties.
Again, this is easier if the event model is small and intentional.
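One lightweight way to make that knowledge checkable is an explicit event-to-destination map. The events, destinations and regions below are invented for illustration; the shape is the point.

```typescript
// Sketch of an explicit event-to-destination map, so "where does this
// event go?" has a checkable answer. All names here are illustrative.

type Route = { destination: string; region: "UK" | "EU" | "US" };

const routing: Record<string, Route[]> = {
  onboarding_completed: [
    { destination: "warehouse", region: "UK" },
    { destination: "product-analytics", region: "US" },
  ],
  invoice_paid: [{ destination: "warehouse", region: "UK" }],
};

// List events whose data leaves the UK, i.e. candidates for the
// restricted-transfer and supplier-terms review.
function eventsLeavingUK(): string[] {
  return Object.entries(routing)
    .filter(([, routes]) => routes.some((r) => r.region !== "UK"))
    .map(([event]) => event);
}
```

A map like this turns the transfer review from an annual archaeology exercise into a diff: a new destination or region shows up in version control the day it is added.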
Treat analytics changes like product changes
Analytics should go through the same discipline as other product work.
When adding a new event, ask why it is needed, who will use it, what decision it supports and when it should be removed. When adding a new vendor, review the data flow, retention, security posture and user-facing explanation. When launching a new feature, decide what minimum measurement is needed to learn safely without over-collecting.
The best teams make this lightweight. Add a short analytics section to product specs. Include tracking changes in pull requests. Keep the event catalogue close to the code. Review unused events quarterly. Make privacy and data protection part of the release checklist rather than an afterthought at banner time.
That approach is not anti-growth. It is how growth data stays useful.
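The quarterly unused-events review in particular can be a small script rather than a meeting. The catalogue and observed names below are illustrative: in practice the "observed" set would come from querying the warehouse over the review window.

```typescript
// Sketch of a lightweight unused-events check for a quarterly review
// or CI job. The catalogue and observed names are invented.

const catalogue = [
  "onboarding_completed",
  "report_exported",
  "legacy_beta_clicked",
];

// Event names actually seen in the warehouse over the review window.
const observed = new Set(["onboarding_completed", "report_exported"]);

// Catalogue entries nobody has emitted: candidates for deletion.
function unusedEvents(cat: string[], seen: Set<string>): string[] {
  return cat.filter((name) => !seen.has(name));
}
```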
The trust dividend
Customers increasingly ask better questions about data handling. Procurement teams want to know what is collected, where it is processed and how users are protected. Security questionnaires are expanding into privacy, AI and operational governance. Product-led companies that can answer those questions clearly look more mature than those that wave at a privacy policy and hope nobody asks for details.
A privacy-aware analytics setup also improves internal decision-making. Cleaner events mean better funnels. Fewer vendors mean fewer discrepancies. Clearer purposes mean less debate about whether a metric is safe to use. Better controls mean fewer surprises when customers audit the product.
Trust is not built by avoiding measurement. It is built by measuring the right things, for clear reasons, in ways users can understand.
A practical 2026 checklist
For UK SaaS teams, the next steps are straightforward:
- audit cookies, SDKs, event pipelines and analytics destinations
- separate essential service measurement from optional product analytics
- remove unused events and vendors
- document the purpose of each meaningful event
- check whether the new PECR exceptions actually apply to your setup
- provide clear information and a simple objection route where required
- block non-essential technologies until the right consent or choice state exists
- review international transfers and supplier terms
- keep analytics changes in the product delivery process
The legal details matter, and teams should take proper advice for their own setup. But the product principle is simple: if analytics helps you improve the product, design it like part of the product.
In 2026, the strongest SaaS teams will not be the ones with the most tracking. They will be the ones with the clearest feedback loops.