How Journalists Anonymize Images to Protect Sources

A source's safety can depend on a single photograph. Here's how investigative journalists, human rights researchers, and activists anonymize images — and what tools they use.

By Blurify · 10 min read

Try Blurify free — blur & redact in your browser, nothing uploaded.

Open tool →

Why image anonymization is critical in journalism

When an investigative journalist publishes a story, the images they use can be as dangerous as the words. A visible face in a crowd photograph, a distinctive tattoo photographed at a protest march, a unique vehicle parked outside a safe house, a name tag on a hospital employee — any of these details can identify sources, witnesses, or activists to hostile actors.

The stakes are not theoretical. Human rights organizations and press freedom groups document cases every year where sources were identified from published images, with consequences ranging from dismissal and harassment to imprisonment and violence. The obligation to protect sources does not end at the written word — it extends to every pixel in every image that accompanies a story.

At the same time, audiences deserve evidence. Visual documentation is often the most compelling and credible element of an investigation. The challenge journalists face is publishing compelling images while ensuring that the details which could identify vulnerable individuals are removed before publication. This requires a deliberate, systematic workflow — not an afterthought.

What makes an image identifying?

Most journalists instinctively think about faces when they consider anonymization. But faces are just one of many identifying features that need to be considered in a rigorous pre-publication review:

  • Faces and facial features — the most obvious target, and the primary input for facial recognition systems deployed by governments, law enforcement agencies, and commercial data brokers in dozens of countries. Even a low-resolution or partially obscured face can sometimes be matched by modern recognition systems.
  • Tattoos and distinguishing physical marks — law enforcement databases in many jurisdictions include tattoo imagery. A distinctive tattoo on a forearm, neck, or hand can identify someone as precisely as a fingerprint in a jurisdiction with a comprehensive tattoo database.
  • Clothing and accessories — a unique jacket, an unusual ring, distinctive footwear, or a branded item of clothing can identify someone across multiple photos taken at the same event, even when the face is obscured. Cross-referencing across multiple images from the same scene is a standard open-source intelligence (OSINT) technique.
  • Vehicle license plates — trivially cross-referenced against national vehicle registration databases by anyone with law enforcement access. Even a partial plate dramatically narrows a search. Plates visible in the background of a photo — not just on the subject's vehicle — should be considered.
  • Location context and background details — distinctive architecture, visible street signs, shop names, landmarks, or even distinctive vegetation in the background can reveal where a source lives, works, or met with a journalist. Location intelligence derived from image backgrounds is a growing OSINT discipline.
  • Device screens and documents — a visible phone screen, computer monitor, or paper document in the background of a photo can expose account usernames, email addresses, open applications, or confidential document titles — information that may identify a source or compromise an ongoing investigation.
  • Reflections — windows, mirrors, glasses lenses, and other reflective surfaces can inadvertently reveal what is behind the camera — including the journalist, the location, or equipment — even when the main subject is properly anonymized.

Before/after example: in the original image the subject's identity is visible; in the blurred version it is protected.

The metadata problem

Every digital photograph carries embedded metadata — information stored inside the image file itself, invisible to the viewer but readable by anyone with the right software. EXIF (Exchangeable Image File Format) data embedded in smartphone photographs typically includes:

  • GPS coordinates — precise latitude and longitude of where the photo was taken, often accurate to within a few meters
  • Timestamp — the exact date, time, and sometimes timezone when the photo was captured
  • Device information — make, model, and sometimes serial number of the camera or smartphone used
  • Software and editing history — which applications were used to open, edit, or process the image and when
  • Camera settings — aperture, shutter speed, ISO — which can reveal details about the equipment used

A source who photographs a document in their office and sends it to a journalist may inadvertently embed their precise office location, the exact time they took the photograph, and the make and model of their personal phone. This combination of metadata can be sufficient to identify and locate the source even if no visual identifying features are present in the image content itself.
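These fields are easy to confirm for yourself. A minimal sketch using the Pillow imaging library (assumed installed; `summarize_exif` is an illustrative helper, not part of any tool mentioned here):

```python
# Sketch: inspect what EXIF metadata a file carries, using Pillow
# (assumed installed via `pip install Pillow`). Illustrative only.
from PIL import Image, ExifTags

def summarize_exif(path):
    """Return a {tag_name: value} dict of the EXIF data embedded in an image."""
    exif = Image.open(path).getexif()
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    # GPS coordinates live in a nested IFD (tag 0x8825), not the main table.
    gps = exif.get_ifd(0x8825)
    if gps:
        named["GPSInfo"] = {ExifTags.GPSTAGS.get(t, t): v for t, v in gps.items()}
    return named
```

Run over a fresh smartphone photo, this typically surfaces Make, Model, DateTime, and a GPSInfo block containing latitude and longitude.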

The John McAfee case is one of the more widely publicized examples: journalists published a photograph of McAfee while he was in hiding, and the EXIF data embedded in the image included GPS coordinates that revealed his location in Guatemala. This is not an isolated incident.

Blurify's exports do not carry EXIF data from the original file. When you export a blurred image from Blurify, the output is a clean PNG, JPEG, or WebP file with no location data, no timestamps, and no device information inherited from the original. For complete metadata protection on the source's end, sources should strip metadata from images on their own device before transmitting them using a tool like ExifTool, MAT2, or the built-in metadata stripping in the Signal messaging app.

Facial recognition and the changing threat landscape

Facial recognition technology has become dramatically more accessible and capable over the past decade. Tools that once required nation-state intelligence budgets are now available commercially or freely on the open web. Services like PimEyes allow anyone to upload a photograph and search for matching faces across billions of indexed web images. Commercial law enforcement databases cover hundreds of millions of faces.

This changes the calculus for journalists significantly. A partially obscured face in a published photograph that would have been considered adequately anonymized a decade ago may now be matchable with high confidence. Even side-profile views — historically considered safe — can now be matched by some systems, and in jurisdictions that deploy gait recognition and ear biometrics, footage taken from behind is no longer safe either.

Best practices for face anonymization have evolved accordingly:

  • Use heavy blurring — a blur radius of 20–30 pixels in a tool like Blurify, applied to the entire face region including hair, ears, and the top of the forehead. The face oval alone is not sufficient.
  • Prefer solid redaction over blur for high-risk subjects — a solid black box is more resistant to AI-based reconstruction than a blurred image. Research has demonstrated that Gaussian-blurred text and images can be partially reconstructed using super-resolution neural networks. A solid black fill leaves nothing to reconstruct.
  • Cover the full head, not just the face — hair shape, color, and texture are used as supporting signals by some recognition systems, particularly for individuals with distinctive hairstyles or head coverings.
  • Consider illustrated or silhouette replacements for the highest-risk cases — replacing a face entirely with an artist's illustration, a silhouette, or an avatar provides categorically stronger anonymization than any blur or redaction, because there is no original pixel data to recover. Publications covering high-risk investigative journalism increasingly use this approach.
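The difference between the two main approaches can be sketched with the Pillow imaging library (assumed installed; the face `box` coordinates are hypothetical, standing in for a region identified during your audit):

```python
# Sketch of the two redaction styles, using Pillow (assumed installed).
# `box` is a hypothetical (left, top, right, bottom) face region.
from PIL import Image, ImageDraw, ImageFilter

def redact_region(img, box):
    """Solid black fill: no pixel data survives, so nothing can be reconstructed."""
    out = img.copy()
    ImageDraw.Draw(out).rectangle(box, fill="black")
    return out

def blur_region(img, box, radius=25):
    """Heavy Gaussian blur (radius 20-30px) for lower-risk contexts."""
    out = img.copy()
    region = out.crop(box).filter(ImageFilter.GaussianBlur(radius))
    out.paste(region, box)
    return out
```

Note the asymmetry: the blur keeps degraded pixel data in place, which is why solid redaction is the safer default when the subject faces a sophisticated adversary.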

Practical workflow for anonymizing images before publication

Step 1: Audit every image for identifying features

Before editing, examine the full image carefully — not just the main subject. Work from foreground to background. List every face, tattoo, license plate, location marker, visible document, and device screen you can find. Pay attention to reflections and background details. Zoom in on parts of the image you might otherwise overlook.

For group photos or crowd images, this review can take significant time. Build it into your pre-publication timeline. A missed identifying feature discovered after publication — or, worse, by a hostile actor — cannot be undone.

Step 2: Strip metadata from the original

Before any editing, strip EXIF metadata from the original file. Use ExifTool (command-line), MAT2 (Metadata Anonymisation Toolkit, available on Linux), or a browser-based EXIF stripper. This step should ideally happen on the source's device before the image is transmitted, but at minimum it should happen before the image enters your publishing workflow.

Signal, the encrypted messaging application, automatically strips metadata from photos when they are sent — making it a good channel for receiving sensitive images from sources. Other messaging platforms do not reliably strip metadata.
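For JPEGs, what these strippers do is conceptually simple: EXIF lives in an APP1 marker segment near the start of the file, and dropping the metadata segments leaves the image data intact. A simplified pure-Python sketch — use ExifTool or MAT2 in practice, since this handles only straightforward baseline JPEGs:

```python
# Simplified sketch of what dedicated strippers (ExifTool, MAT2) do for JPEGs:
# drop the APP1 (EXIF) and other metadata marker segments. Illustrative only --
# it assumes a straightforward baseline JPEG with no fill bytes between markers.
import struct

def strip_jpeg_metadata(data: bytes) -> bytes:
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            out += data[i:]  # unexpected byte: copy the remainder verbatim
            break
        marker = data[i + 1]
        if marker == 0xDA:  # Start of Scan: image data follows, copy verbatim
            out += data[i:]
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i:i + 2 + length]
        # Drop APP1..APP15 (0xE1-0xEF, incl. EXIF/XMP) and COM (0xFE) comments;
        # keep APP0 (JFIF) and all structural segments (DQT, SOF, DHT, ...).
        if not (0xE1 <= marker <= 0xEF or marker == 0xFE):
            out += segment
        i += 2 + length
    return bytes(out)
```

The stripped file decodes identically; only the metadata segments are gone.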

Step 3: Apply redactions in Blurify

Open the image in Blurify. For maximum protection in high-risk journalism contexts, use Redact mode (solid black fill). For lower-risk contexts where visual continuity matters more, a heavy Gaussian blur (radius 20–30px) provides good protection.

Cover every identifying feature from your audit in step 1. Use the freehand tool for irregular shapes — tracing around a tattoo, an unusual piece of clothing, or an awkwardly positioned face is more precise and complete than a rectangle. Draw shapes slightly larger than the feature itself to ensure full coverage at the edges.

For images with many faces, use the Detect Faces button to automatically generate blur shapes for all detected faces, then manually review the result to add shapes for any faces the detector missed and confirm no shapes were incorrectly placed.

Step 4: Export and verify

Export the redacted image and open the exported file in a fresh browser tab or image viewer. Zoom in to every redacted region and confirm that the original content is fully covered with no visible edges or partial reveals. Pay particular attention to the margins of blur shapes where the effect tapers — a blur that softens at the edges may leave facial features or text partially visible.
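The metadata side of this verification can also be automated. A small sketch using Pillow (assumed installed; `has_exif` is an illustrative helper) that flags any EXIF tags surviving in an exported file:

```python
# Sketch: confirm an exported image carries no EXIF metadata (Pillow assumed
# installed). Illustrative helper, not part of any tool mentioned here.
from PIL import Image

def has_exif(path) -> bool:
    """True if the file still carries any top-level EXIF tags."""
    return len(Image.open(path).getexif()) > 0
```

This catches the pipeline mistake of accidentally feeding the original, metadata-bearing file into the publishing workflow instead of the clean export.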

For critical publications, have a colleague independently review the exported image with fresh eyes. A second person often catches features that the person who did the original editing has become visually accustomed to.

Step 5: Use only the exported file in your publishing pipeline

Only the exported, redacted file should enter your publishing workflow, CMS, or email chain. The original file should either be stored in a secure, access-controlled location (if it needs to be preserved as evidentiary material) or securely deleted using a tool like BleachBit. Never share the original file through the same channels as the publication.

Tools used by journalists and researchers

Organizations like the Electronic Frontier Foundation (EFF), Access Now, and the Freedom of the Press Foundation recommend a layered approach to image security in journalism:

  • Image anonymization: Blurify for browser-based, no-upload blurring and redaction of photos, videos, and PDF documents. The tool never receives or stores your images, which is critical when working with sensitive source material.
  • EXIF metadata stripping: ExifTool (cross-platform command-line tool), MAT2 / Metadata Anonymisation Toolkit (Linux, designed for journalists), or Image Scrubber (browser-based, no upload). These should be applied to original files before editing.
  • Secure source communication: Signal for receiving sensitive images from sources. Signal automatically strips EXIF metadata from transmitted photos and provides end-to-end encrypted transport.
  • Secure file deletion: BleachBit (cross-platform) for securely overwriting original files after they have been archived or are no longer needed. Note that Apple removed the Secure Empty Trash feature from macOS because overwriting cannot be guaranteed on SSDs; on Apple hardware, full-disk encryption with FileVault is the practical safeguard.
  • Secure operating environment: For the most sensitive journalism, the Tails operating system (which routes all traffic through Tor and leaves no traces) provides a secure environment for handling sensitive materials.

The ethical dimension

Anonymizing images is not just a technical precaution — it is an ethical responsibility. A journalist who publishes an identifiable image of a source after implicitly or explicitly promising confidentiality has broken a fundamental commitment, regardless of their intentions. The commitment to source protection is not conditional on technical difficulty. If the workflow to do it correctly requires an extra ten minutes, that time is owed to the source.

This principle extends beyond traditional journalism. Human rights documentarians, academic researchers working with vulnerable populations, social workers, healthcare professionals, and NGO field workers all operate under analogous obligations when they capture or handle images of people in sensitive contexts.

The practical tools to fulfill this obligation are freely available and, in the case of browser-based tools like Blurify, require no technical expertise. The question is not whether anonymization is difficult — it isn't — but whether it is built into the workflow as a standard step rather than an afterthought.

Frequently asked questions

Is blurring a face enough to defeat facial recognition?

A strong Gaussian blur (radius 20px or more, covering the full head including hair and ears) significantly degrades the quality of the facial data and substantially reduces recognition accuracy for most commercial systems. For the highest-risk cases — subjects facing hostile government surveillance or sophisticated threat actors — a solid black redaction box is preferable, as it leaves no pixel data to analyze.

Does Blurify strip metadata from exported images?

Yes. Blurify's exported images do not carry EXIF metadata from the original file. The output is a clean flat image. However, you should still strip metadata from the original source file before it enters your editing workflow, using a dedicated EXIF removal tool.

Can I anonymize video footage?

Yes. Blurify supports video files (MP4, WebM, MOV) in addition to images. You can draw blur or redaction shapes over faces and other identifying elements in video footage, and use the keyframe system to animate blur regions to track moving subjects. Video processing runs entirely in the browser using ffmpeg.wasm.

What about audio — can voices identify sources?

Voice identification is a separate and equally important concern for broadcast journalists. Pitch-shifting and voice modulation tools are commonly used to disguise interviewee voices. This is outside the scope of image anonymization but should be part of the same source-protection workflow for any multimedia content.

100% free · no sign-up

Try Blurify for free

Blur faces, redact documents, and censor screenshots — all in your browser. Nothing is ever uploaded to our servers.

Open Blurify — it's free →
