
How We Test Running Shoes  
Transforming Data Into Objective Reviews

A lineup of running shoes displayed with other running shoes in the background.

For the past decade, we've been publishing trustworthy reviews of a wide range of products, and we've now expanded that expertise to running shoes. We purchase every pair independently—no cherry-picked samples or freebies from manufacturers. Our rigorous process involves multiple teams and experts, from purchasing to publishing. We do more than just run in these shoes; we use high-tech testing tools to measure various performance aspects with objective, data-driven results. As we continue to develop, we will add tests to our methodology. If you have suggestions for things you'd like to see us test, let us know in the comments below.

Each running shoe review includes dozens of individual tests, taking many work days to complete. You're in the right place if you're wondering what goes into producing our running shoe reviews. Below, we'll break down how we deliver objective, data-driven reviews.

Product Selection

Before testing any running shoe, we first buy it. With hundreds of running shoes released each year worldwide, we can't test them all. Instead, we primarily purchase and test models available in the United States that we believe will be most relevant to runners based on popularity and requests. We're currently focusing on road running shoes but plan to expand to more categories.

A screenshot of the Review Pipeline page.
You can keep track of progress on the Review Pipeline page.

There are two primary ways we choose which shoes to buy:

  1. Popular, New, or Recommended Models: We monitor new releases from major brands like Nike, Adidas, Brooks, On, and New Balance, as well as smaller specialty brands. We frequently refresh our recommendation articles, like 'Best Running Shoes,' with newly released models in each shoe category so they always reflect the most relevant picks for runners, and we also consider models that receive a lot of attention in online running communities.
  2. Voting Tool: Readers can use our voting tool to vote for the running shoes they want us to review next. Each user gets one vote every 60 days (or 10 votes for Insiders). If a model reaches 25 votes, we purchase and test it!

We don't accept review samples from manufacturers; instead, we purchase directly from Amazon, online shoe retailers, and local physical stores. Once they arrive, we document and photograph them before sending them off for testing.

Our Philosophy

We provide authentic product reviews, free from bias or external influence, so our users can rely on a transparent and rigorous testing process to make informed decisions while we maintain editorial independence. Every shoe, from high-end super shoes to budget daily trainers, undergoes the same thorough evaluation without influence from brands or advertising.

This principle guides how we test every running shoe. Price and expected performance don't influence our scoring—if an expensive shoe underperforms compared to a cheaper one, it won't receive a higher score based on brand or cost.

Testing

Before we get into the specifics, let's discuss why and how we test shoes. After unboxing and photographing a pair, we pass it to a dedicated tester. We rely on standardized procedures for every review, so you can compare a budget trainer and a premium carbon-plated racer on the same scale—price and marketing claims aren't factored into our scores. We also continuously refine our methodology to stay current with new tech.

A progress tracker showing a product review workflow, currently at the "Testing" stage, with earlier steps completed and later steps pending.
Our review tracker lets you easily track the progress of each review.

Our initial Test Bench 0.8 introduced rigorous tests for energy return, stability, and cushioning. As we update our process, we'll retest shoes so our findings stay relevant. That said, results from different test benches might not be directly comparable. We see our tests as just a starting point; if runners online raise specific questions, we investigate further. Our standardized methodology is structured into two test categories: Design and Performance.

Design: Weight, Shape, Stack Heights
Performance: Heel Energy Return, Forefoot Energy Return, Heel Cushioning, Forefoot Cushioning, Heel Firmness, Forefoot Firmness

Design

You might wonder how exactly we can "test" a shoe's design. We don't really judge style or aesthetics—we focus on measurable aspects of a shoe's build so you can decide if it'll fit well on your feet and meet your running needs. These measurements are specific to our test size (a men's US size 9). Shoe weight, shape, and stack heights can vary between sizes or even between men's and women's versions, so keep that in mind when comparing our results to your own shoe size. Below, we break down three major design factors we measure: Weight, Shape, and Stack Heights, so you can easily see how a shoe stacks up against others on the market.

Weight

We weigh each shoe (with laces and insole) multiple times using a scientific scale, ensuring we capture an accurate reading for both the left and right shoe in the same size. We then use the average weight to compare different models. This is important because weight differences between shoes—such as a super shoe versus a daily trainer—can impact overall running efficiency.
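To illustrate the averaging, here's a toy calculation with made-up scale readings (the numbers below are hypothetical and not from any shoe we've tested):

```python
# Hypothetical scale readings in grams from repeated weighings of each shoe
# (same size, laces and insole included).
left_shoe_g = [232.4, 232.6, 232.5]
right_shoe_g = [234.1, 234.0, 234.2]

# We compare models using the average across all readings for the pair.
readings = left_shoe_g + right_shoe_g
pair_average_g = sum(readings) / len(readings)
print(f"Average shoe weight: {pair_average_g:.1f} g")
```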

An adidas Adizero Adios Pro Evo 1 placed on a precision scale, showing a weight of 133.14 grams.
Our weight photo for the right shoe of the adidas Adizero Adios Pro Evo 1.

Shape

We examine each shoe's construction, both internally and externally, to understand its overall fit and shape. Using an Artec Space Spider 3D scanner, we visualize the upper and outsole to assess how the design accommodates various foot types. By measuring internal dimensions such as heel, arch, and forefoot width, we provide insight into whether the shoe will feel snug, spacious, or somewhere in between. Outsole width measurements also feed into our assessment of the shoe's lateral stability.

Stack Heights

We examine the thickness of the cushioning by measuring the shoe's midsole at key points along the heel and forefoot. This tells us how much material stands between your foot and the ground, which can significantly influence how the shoe feels on the run. Following World Athletics guidelines, we use a caliper to measure these stack heights while the shoe rests on a flat surface. It's worth noting that World Athletics regulates maximum stack heights, though these limits are mainly relevant to elite marathon runners. We recommend consulting their current guidelines on approved and banned shoe models for the latest information. From these measurements, we also calculate the heel-to-toe drop—an important spec that affects running mechanics and can guide you toward a shoe that matches your preferred running style and strike pattern.
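As a quick illustration of how the drop falls out of the stack measurements, here's a minimal sketch using hypothetical numbers (not taken from any specific shoe):

```python
# Hypothetical caliper readings in millimetres, taken with the shoe
# resting on a flat surface.
heel_stack_mm = 38.5
forefoot_stack_mm = 30.0

# Heel-to-toe drop is simply the difference between the two stack heights.
drop_mm = heel_stack_mm - forefoot_stack_mm
print(f"Heel-to-toe drop: {drop_mm:.1f} mm")  # 8.5 mm
```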

Other Features

In addition to these attributes, we assess other design elements crucial to performance. The tongue gusset type and the presence of a plate are important since they impact fit and lateral stability during runs. We evaluate the tongue design by determining whether it's fully gusseted, semi-gusseted, or non-gusseted. When examining plates, we specify materials such as carbon fiber, plastic, or fiberglass alongside design variations like rods or shanks. We also consider any additional unique design aspects or community feedback impacting overall performance.

Performance

To evaluate a running shoe's performance, we rely on a set of standardized mechanical tests that allow us to objectively measure how it behaves under real-world running forces. While the feel and fit are important, our performance tests focus on what the shoe actually does—how it compresses and rebounds under stress. These results help runners understand how a shoe feels out of the box and how it will behave under their specific stride and pace. For a much deeper dive into this process—including data breakdowns, force curves, and mechanical insights—you can check out our full Compression Testing & Energy Return Research article. It walks through how we built the testing setup, how the results are interpreted, and how this data connects to what you actually feel on the run.

Compression

One of the core elements we measure is compression—how much and where a shoe compresses under force. This helps determine how cushioned the shoe feels during loading and landing. We use a universal testing machine with an electrodynamic linear actuator and a load cell at settings designed to simulate the shape and pressure of a human foot striking the ground repeatedly.

We test all shoes across a consistent force range—from 300 to 900 newtons—to reflect the variety of real-world running scenarios. This range helps us capture how the midsole behaves under both light and heavy loads. We use the full force range to calculate an overall cushioning score based on the average compression across that spectrum. To account for different strike patterns and the varying densities or layering of foams, we also split our measurements into heel and forefoot compression. This distinction helps reveal how the shoe responds depending on where a runner lands—whether you're a heel striker, midfoot striker, or forefoot striker. In general, lighter forces are more representative of smaller runners or slower paces, while higher forces simulate heavier runners, faster efforts, and more forceful landings. This approach ensures our results are relevant to a wide range of running styles and body types.
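For illustration, here's a minimal sketch of how an average-compression figure over the 300–900 N window could be computed from loading data; the readings, the interpolation grid, and the function name are assumptions made for the example, not our actual analysis code:

```python
import numpy as np

def average_compression_mm(forces_n, displacements_mm, f_min=300.0, f_max=900.0):
    """Average midsole compression (mm) over a force window.

    forces_n / displacements_mm: samples from the loading phase of the
    universal testing machine. All values here are hypothetical.
    """
    forces_n = np.asarray(forces_n)
    displacements_mm = np.asarray(displacements_mm)
    # Interpolate displacement on an evenly spaced force grid inside the window.
    grid = np.linspace(f_min, f_max, 200)
    compression = np.interp(grid, forces_n, displacements_mm)
    return compression.mean()

# Hypothetical heel-loading data: displacement grows with applied force.
forces = [0, 150, 300, 450, 600, 750, 900]
disp   = [0.0, 2.1, 3.8, 5.2, 6.3, 7.2, 8.0]
print(f"Heel cushioning (avg compression): {average_compression_mm(forces, disp):.2f} mm")
```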

A heel compression test being conducted on the adidas Adizero Adios Pro 4 using a universal testing machine, with a cut shoe placed between the test head and base plate. A camera is set up nearby to record the process.
Heel compression testing on the adidas Adizero Adios Pro 4, using a universal testing machine (UTM) with test head and base plate attachments installed.

Energy Return

We also evaluate how efficiently the shoe's foam rebounds after compression—commonly called energy return. This indicates how much energy is given back to the runner during decompression, directly influencing how responsive and efficient the shoe feels, especially at faster paces.

We can determine how much of the applied energy is returned with each step by testing how the foam deforms and recovers under a full compression cycle. This makes it easier to identify shoes that feel bouncy and fast versus those that prioritize impact absorption. We test this across a consistent force range from 300 to 900 newtons, just like compression, since energy return can vary depending on how hard you land, where your foot strikes, and pace. We also measure energy return separately at the heel and forefoot to reflect differences in strike patterns and foam placement. This allows us to capture how different zones of the midsole behave—particularly in shoes with dual-density foams or midsole designs that are more energetic in one area than another.
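As a rough sketch of the underlying calculation, the energy absorbed and returned can be estimated as the areas under the loading and unloading force-displacement curves; the data points and helper function below are hypothetical and deliberately simplified:

```python
import numpy as np

def energy_return_pct(disp_load_mm, force_load_n, disp_unload_mm, force_unload_n):
    """Energy return (%) from one compression cycle.

    Energy absorbed is the area under the loading force-displacement curve;
    energy returned is the area under the unloading curve. All data here
    are hypothetical, for illustration only.
    """
    absorbed = np.trapz(force_load_n, disp_load_mm)      # N*mm = mJ
    returned = np.trapz(force_unload_n, disp_unload_mm)  # N*mm = mJ
    return 100.0 * returned / absorbed

# Hypothetical forefoot cycle: the unloading curve sits below the loading
# curve (hysteresis), so some of the input energy is lost as heat.
d_load   = [0.0, 2.0, 4.0, 6.0, 8.0]
f_load   = [0.0, 200.0, 450.0, 700.0, 900.0]
d_unload = [0.0, 2.0, 4.0, 6.0, 8.0]
f_unload = [0.0, 150.0, 360.0, 580.0, 900.0]
print(f"Energy return: {energy_return_pct(d_load, f_load, d_unload, f_unload):.1f}%")
```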

A cross-section of the HOKA Mach X 2 showing its dual-foam midsole.
A side view of the HOKA Mach X 2. It has a dual-foam midsole with more EVA-based (orange) foam in the heel and a higher concentration of PEBA-based (white) foam in the forefoot, affecting energy return results accordingly.

Firmness

We also evaluate firmness, which describes how soft or hard the midsole feels underfoot across different levels of force (from 300N to 900N). This is particularly important because preferences can vary: some runners want a soft, plush ride for comfort, while others prefer a firmer platform that enhances lateral stability.

We test firmness at the heel and forefoot using a linear actuator paired with a load cell, compressing the shoe up to 900 newtons. This range allows us to simulate different body weights, foot strike patterns, and running intensities. Because firmness can change depending on how hard and where you land, this test helps us provide a more complete picture of how the shoe will feel for different runners. Like other performance tests, we break down firmness results by heel and forefoot to account for shoes with dual-density foams and strike patterns.
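One simple way to express firmness from this kind of data is a stiffness value (force per millimetre of compression) at each force level; the sketch below uses a secant stiffness and made-up readings purely for illustration:

```python
import numpy as np

def stiffness_at(force_level_n, forces_n, displacements_mm):
    """Secant stiffness (N/mm) at a given force level.

    Higher stiffness reads as a firmer midsole. The data and the choice of
    a secant (rather than tangent) stiffness are illustrative assumptions.
    """
    disp = np.interp(force_level_n, forces_n, displacements_mm)
    return force_level_n / disp

# Hypothetical heel-loading data from the actuator and load cell.
forces = [0, 150, 300, 450, 600, 750, 900]
disp   = [0.0, 2.1, 3.8, 5.2, 6.3, 7.2, 8.0]
for level in (300, 600, 900):
    print(f"{level} N -> {stiffness_at(level, forces, disp):.0f} N/mm")
```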

Real-World Testing

The mechanical tests give us objective, repeatable data, but we also believe running in the shoes is essential. Our in-house testers wear each model we test for a significant number of miles across a range of paces, conditions, and workouts. This real-world usage helps us verify that the lab data matches what runners actually experience on the road. To keep things consistent, our tester always uses one dedicated pair of each shoe for lab testing and a separate pair for logging miles in.

To stay closely aligned with the actual experience of running in each model, we also maintain an internal pool of testers from across departments who borrow shoes and provide structured feedback. We're not looking for isolated anecdotes or one-off opinions—we look for patterns and consensus. When multiple testers experience the same sensation or issue, and it aligns with our lab data, that insight gets integrated into the review. This combination of hands-on testing and data validation helps us catch any discrepancies and deliver a well-rounded, accurate picture of a shoe's performance.

Writing

Once testing is complete, the writing process begins. We aim to take complex lab data and translate everything into a clear, helpful review that reflects how the shoe performs for different types of runners.

The writer's job starts with reviewing all test results and tester notes. If anything is unclear or unexpected, the writer works directly with the tester to validate the findings; if something doesn't add up, we pause and recheck the testing before moving forward. Transparency is key: every claim we make in a review is backed by data and by consensus observed across testing.

We also take the time to understand the shoe's place in the market. Before writing begins, the writer researches where the model fits in the brand's lineup, whether there are other versions or variants (like wide versions), and how it compares to other shoes we've tested—both direct competitors and past iterations.

Writing itself typically takes 1–2 business days, followed by a structured peer-review process involving the original testers and a second writer. This process ensures that the information is accurate, the conclusions are fair, and the review stays free of bias. Once the peer review is complete, the article goes to our editorial team for a final pass. They edit every review (and everything else we publish) against our internal style guides, catching inconsistencies, unclear language, and formatting issues, and making sure the writing meets the expected standard.

Overall, each running shoe review is the result of collaboration between test engineers, testers, writers, photographers, and editors.

A screenshot of the verdict summary for the Nike Alphafly 3.
The Our Verdict section for the Nike Alphafly 3.

Rewrites and Updates

Publishing a shoe review isn't the end of the process—it's part of a longer lifecycle. We keep every pair of running shoes we test, storing them in our lab so we can revisit models when needed. This gives us the flexibility to update reviews when our test methodology evolves, when community questions arise, or when we want to compare new models to previous versions on equal footing.

As we introduce major updates to our testing bench, we'll go back and retest older models so that scores remain consistent and directly comparable. We don't retest every aspect of the shoe, only the relevant parts that are affected by the update.

Recommendations

After we've tested a shoe and published the review, we take a step back and consider whether we should include it in one of our recommendation articles—our curated best-of lists designed to help runners find the right pair for their needs. These articles are updated regularly, but when a standout model comes along, we'll often update a guide immediately to reflect its inclusion.

When deciding which shoes to recommend, we focus primarily on performance and how each model fits within the broader running shoe landscape. However, we also consider a few practical factors. If two shoes perform similarly but one is significantly cheaper without meaningful trade-offs, we'll favor the more accessible option. We also avoid recommending shoes that are nearly impossible to find—availability matters, and we aim to recommend models that runners can actually buy.

Our recommendation writers are the same people who write the reviews, so they understand the data and the context behind it. If a shoe has strengths that don't show up directly in our scores, they'll highlight those features.

These recommendations are built to serve most runners but aren't one-size-fits-all. If the listed models don't quite match your preferences, you can explore all our data using tools like the comparison tool, filterable tables, or our 3D shape compare tool to make your own personalized choice. Our goal is to give you the clearest possible picture of what each shoe can do so you can confidently find the one that's best for your needs.

Conclusion

When you see our detailed and trustworthy running shoe reviews, there's a lot of work behind them. From unpacking and photographing to testing, writing, and editing, each review goes through a structured, unbiased process to ensure consistency and accuracy. This article is just a quick overview of how we test running shoes. To learn more about our specific methods, check out our research articles.

Comments


How We Test Running Shoes: Transforming Data Into Objective Reviews: Main Discussion

What do you think of our article? Let us know below.


Want to learn more? Check out our complete list of articles and tests on the R&D page.
