Tag Archives: AnandTech

AnandTech | Intel Iris Pro 5200 Graphics Review: Core i7-4950HQ Tested

The Prelude

As Intel got into the chipset business it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP, FSB), the size of the North Bridge die had to increase in order to accommodate all of the external facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn’t need all of that die area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw this problem and contemplating throwing some L3 cache onto its North Bridges. Intel’s solution was to give graphics away for free.

The budget for Intel graphics was always whatever free space remained once all other necessary controllers in the North Bridge were accounted for. As a result, Intel’s integrated graphics was never particularly good. Intel didn’t care about graphics, it just had some free space on a necessary piece of silicon and decided to do something with it. High performance GPUs need lots of transistors, something Intel would never give its graphics architects – they only got the bare minimum. It also didn’t make sense to focus on things like driver optimizations and image quality. Investing in people and infrastructure to support something you’re giving away for free never made a lot of sense.

Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure blooded CPU company, and the GPU industry wasn’t interesting enough at the time. Intel’s GPU leadership needed another approach.

A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel’s chipsets were always built on a n-1 or n-2 process. If Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect allowed Intel to help get more mileage out of its older fabs, which made the accountants at Intel quite happy as those $2 – $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die areas up once again. This time however, the problem wasn’t as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets, and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or Intel would have to integrate parts of the chipset into the CPU.

Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.

Pure economics and an unwillingness to invest in older fabs made the GPU a first class citizen in Intel silicon terms, but Intel management still didn’t have the motivation to dedicate more die area to the GPU. That encouragement would come externally, from Apple.

Looking at the past few years of Apple products, you’ll recognize one common thread: Apple as a company values GPU performance. As a small customer of Intel’s, Apple’s GPU desires didn’t really matter, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape the designs. There’s no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into silicon. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel’s roadmap. That’s how we got Intel’s HD 3000. And that’s how we got here.

Read the full review @ AnandTech.

AnandTech | The Haswell Review: Intel Core i7-4770K & i5-4670K Tested

The Launch Lineup: Quad Cores For All

As was the case with the launch of Ivy Bridge last year, Intel is initially launching with their high-end quad core parts, and as the year progresses will roll out dual cores, low voltage parts, and other lower-end parts. That means the bigger notebooks and naturally the performance desktops will arrive first, followed by the ultraportables, Ultrabooks and more affordable desktops. One change however is that Intel will be launching their first BGA (non-socketed) Haswell part right away, the Iris Pro equipped i7-4770R.

Intel 4th Gen Core i7 Desktop Processors
Model Core i7-4770K Core i7-4770 Core i7-4770S Core i7-4770T Core i7-4770R Core i7-4765T
Cores/Threads 4/8 4/8 4/8 4/8 4/8 4/8
CPU Base Freq (GHz) 3.5 3.4 3.1 2.5 3.2 2.0
Max Turbo (GHz) 3.9 (Unlocked) 3.9 3.9 3.7 3.9 3.0
Test TDP 84W 84W 65W 45W 65W 35W
HD Graphics 4600 4600 4600 4600 Iris Pro 5200 4600
GPU Max Clock (MHz) 1250 1200 1200 1200 1300 1200
L3 Cache 8MB 8MB 8MB 8MB 6MB 8MB
DDR3 Support 1333/1600 1333/1600 1333/1600 1333/1600 1333/1600 1333/1600
vPro/TXT/VT-d/SIPP No Yes Yes Yes No Yes
Package LGA-1150 LGA-1150 LGA-1150 LGA-1150 BGA LGA-1150
Price $339 $303 $303 $303 OEM $303

Starting at the top of the product and performance stack, we have the desktop Core i7 parts. All of these CPUs feature Hyper-Threading Technology, so they’re the same four cores plus four virtual cores that we’ve seen since Bloomfield hit the scene. The fastest chip for most purposes remains the K-series 4770K, with its unlocked multiplier and slightly higher base clock speed. Base core clocks as well as maximum Turbo Boost clocks are basically dictated by the TDP, with the 4770S less likely to maintain maximum turbo, and the 4770T and 4765T giving up quite a bit more in clock speed in order to hit substantially lower power targets.

It’s worth pointing out that the highest “Test TDP” values are up slightly relative to the last generation Ivy Bridge equivalents—84W instead of 77W. Mobile TDPs are a different matter, and as we’ll discuss elsewhere they’re all 2W higher, but that is further offset by the improved idle power consumption Haswell brings.

Nearly all of these are GT2 graphics configurations (20 EUs), so they should be slightly faster than the last generation HD 4000 in graphics workloads. The one exception is the i7-4770R, which is also the only chip that comes in a BGA package. The reasoning here is simple: if you want the fastest iGPU configuration (GT3e with 40 EUs and embedded DRAM), you’re probably not going to have a discrete GPU and will most likely be purchasing an OEM desktop. Interestingly, the 4770R also drops the L3 cache down to 6MB, and it’s not clear whether this is due to it having no real benefit (i.e. the eDRAM may function as an even larger L4 cache), or if it’s to reduce power use slightly, or Intel may have a separate die for this particular configuration. Then again, maybe Intel is just busily creating a bit of extra market segmentation.

Not included in the above table are all the common features to the entire Core i7 line: AVX2 instructions, Quick Sync, AES-NI, PCIe 3.0, and Intel Virtualization Technology. As we’ve seen in the past, the K-series parts (and now the R-series as well) omit support for vPro, TXT, VT-d, and SIPP from the list. The 4770K is an enthusiast part with overclocking support, so that makes some sense, but the 4770R doesn’t really have the same qualification. Presumably it’s intended for the consumer market, as businesses are less likely to need the Iris Pro graphics.

Intel 4th Gen Core i5 Desktop Processors
Model Core i5-4670K Core i5-4670 Core i5-4670S Core i5-4670T Core i5-4570 Core i5-4570S
Cores/Threads 4/4 4/4 4/4 4/4 4/4 4/4
CPU Base Freq (GHz) 3.4 3.4 3.1 2.3 3.2 2.9
Max Turbo (GHz) 3.8 (Unlocked) 3.8 3.8 3.3 3.6 3.6
Test TDP 84W 84W 65W 45W 84W 65W
HD Graphics 4600 4600 4600 4600 4600 4600
GPU Max Clock (MHz) 1200 1200 1200 1200 1150 1150
L3 Cache 6MB 6MB 6MB 6MB 6MB 6MB
DDR3 Support 1333/1600 1333/1600 1333/1600 1333/1600 1333/1600 1333/1600
vPro/TXT/VT-d/SIPP No Yes Yes Yes Yes Yes
Package LGA-1150 LGA-1150 LGA-1150 LGA-1150 LGA-1150 LGA-1150
Price $242 $213 $213 $213 $192 $192

The Core i5 lineup basically rehashes the above story, only now without Hyper-Threading. For many users, Core i5 is the sweet spot of price and performance, delivering nearly all the performance of the i7 models at 2/3 the price. There aren’t any Iris or Iris Pro Core i5 desktop parts, at least not yet, and all of the above CPUs are using the GT2 graphics configuration. As above, the K-series part also lacks vPro/TXT/VT-d support but comes with an unlocked multiplier.

Obviously we’re still missing all of the Core i3 parts, which are likely to be dual-core once more, along with some dual-core i5 parts as well. These are probably going to come in another quarter, or at least a month or two out, as there’s no real need for Intel to launch their lower cost parts right now. Similarly, we don’t have any Celeron or Pentium Haswell derivatives launching yet, and judging by the Ivy Bridge rollout I suspect it may be a couple quarters before Intel pushes out ultra-budget Haswell chips. For now, the Ivy Bridge Celeron/Pentium parts are likely as low as Intel wants to go down the food chain for their “big core” architectures.

Read the full review @ AnandTech.

AnandTech | MSI Z77A-GD65 Gaming Review

In recent motherboard generations, the ‘in style’ thing to do is to separate a company’s SKU line into several compartments – channel/mainstream, overclocking, budget, smaller-than-ATX, X feature enabled (such as Thunderbolt), and gaming. The latest addition to the gaming scene is MSI, who have recently released their Z77 Gaming range, despite being a stone’s throw away from the Haswell launch.


So when a reviewer comes across a product designated ‘gaming’, we clearly want to see and feel why it is a gaming product. This would mean specific features aimed at the gaming crowd, to help reduce lag, boost frame rates, and enhance the experience of the whole package. We already have contenders in this space aside from MSI – ASUS has their Republic Of Gamers range which we have rated very highly, Gigabyte has the G1 range, and ASRock wheels out Fatal1ty. Off the back of CeBIT 2013, MSI have launched four gaming boards in the Z77 range: the Z77A-GD65 Gaming, the Z77A-G45 Gaming, the Z77A-G43 Gaming and the B75A-G43 Gaming.

These motherboards come off the back of a successful gaming laptop range for MSI. In the wake of the global recession, every motherboard manufacturer needed to diversify its portfolio in order to cover itself, and MSI did this in the notebook arena. The gaming notebooks feature a red and black color scheme, which seems to be the going trend for gaming product lines:


From left to right – ASRock Fatal1ty Z77 Professional, MSI Z77A-GD65 Gaming, ASUS Maximus V Formula

The only company that bucks this trend is Gigabyte, aiming for a gaming green instead, or orange for the overclocking range. MSI aim for yellow with their overclocking range – the MPower and Lightning GPUs being the prime examples (the XPower is still relatively undefined in the blue end of the spectrum). However MSI is tying their ranges together, at least in color scheme – the Gaming range will have GPUs featuring a red Twin Frozr 4 cooler, and there have been a lot of images online featuring these two with red-LED Avexir memory.

While MSI have had great success with their GPU lines (the Lightning range constantly breaks overclocking world records and is more often than not the fastest pre-overclocked version of each card), the motherboard range needs a boost. MSI aims primarily at the low to mid-range, as seen by the lack of a Z77 PLX 8747 enabled motherboard in the lineup for three-way and above – even the GD80 and MPower are non-PLX. Thus if they want to release a gaming motherboard, gamers will want the best available, especially if they have that extreme setup. The Z77A-GD65 Gaming, despite being the top of the range so far, is the one we are reviewing today. It hits the line down the middle, going for the single and dual GPU gamer, but given how close we are to Haswell, was it worth the effort?

MSI Z77A-GD65 Gaming Overview

Speaking to MSI Europe, we were told the reason for releasing a Z77 Gaming product line was the Haswell delay. They have had plans for a Z87 Gaming range since they got the Haswell specifications, but the additional 4-6 month delay meant the gaming range was brought forward. The only issue is that the gaming range on Z87 will have a different naming scheme; the Z77 gaming range is a naming hybrid for now.

One of the first thoughts that popped into my mind when I started this review is ‘this looks like a normal GD65’. There are a large number of similarities:

In actual fact, we are dealing with almost the exact same layout. Same number of SATA ports, same VRM configuration, same location for OC buttons, USB ports, voltage check points, fan headers, the lot. The difference it seems is in the ‘gaming details’.

Over the base GD65 model we get a Qualcomm Atheros Killer NIC E2205-B gigabit Ethernet controller, a regular feature on the MSI Gaming notebook range. This NIC is designed to offload network features, such as packet prioritization, onto the NIC itself rather than the CPU, as well as bypassing the Windows network stack for high priority applications. Most motherboards now offer some form of network management tool; however, these usually require CPU intervention in order to keep everything in the right order. While I cannot say that a Killer NIC is vital in improving FPS or response times, it could help reduce the ‘user’ end side of the lag in gaming. Though if you are suffering from lag due to your own computer, turn off downloads, Facebook and updates during competitions.

Similar to ASRock’s Fatal1ty range, the MSI Gaming also has a ‘Gaming Device Port’, which should allow for higher polling rate mice (500-1000 Hz) to be used. Whether a higher mouse polling rate is useful is still debatable and depends on the frame rate – if you are polling up to 16-32x more often than the FPS of the game, the PC has to decide between the average acceleration and location vs. the latest acceleration/location and inject it into the gaming stream appropriately.
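
As a rough illustration of that polling-rate-to-frame-rate ratio, here is a quick back-of-the-envelope calculation; the rates below are just example values, not measurements from this board:

```python
# How many mouse samples arrive per rendered frame at common polling
# rates and frame rates (illustrative values only).
polling_rates_hz = [125, 500, 1000]
frame_rates_fps = [30, 60, 120]

for poll_hz in polling_rates_hz:
    for fps in frame_rates_fps:
        samples_per_frame = poll_hz / fps
        print(f"{poll_hz:>4} Hz mouse @ {fps:>3} fps -> "
              f"{samples_per_frame:5.1f} samples per frame")

# At 1000 Hz and 30-60 fps the game sees roughly 16-33 samples per frame,
# which is where the "16-32x more than the FPS" figure above comes from.
```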

Read the full review @ AnandTech | MSI Z77A-GD65 Gaming Review.

AnandTech | The HTC One Review

It is nearly impossible to begin to review the HTC One without some context, and I’ll begin our review of the HTC One (formerly the device known as codename M7) much the same way I did my impressions piece simply by stating that HTC is in an interesting position as a result of last year’s product cycle. If there’s one thing Anand has really driven home for me in my time writing for AnandTech, it’s that in the fast-paced mobile industry, a silicon vendor or OEM really only has to miss one product cycle in a very bad way to get into a very difficult position. The reality of things is that for HTC with this last product cycle there were products with solid industrial design and specs for the most part, but not the right wins with mobile operators in the United States, and not the right marketing message abroad. It’s easy to armchair the previous product cycle now that we have a year of perspective, but that’s the reality of things. HTC now needs a winner more than ever.


HTC One X, HTC Butterfly, HTC One

For 2013 HTC is starting out a bit differently. Rather than announce the entire lineup of phones, it’s beginning with the interestingly-named HTC One. It’s just the HTC One — no S or X or V or any other monikers at all. It’s clear that the HTC One is the unadulterated representation of HTC’s vision for what the flagship of its smartphone lineup should be. HTC is different from other OEMs in that it only makes smartphones, and as a result the flagship clearly defines the rest of the product portfolio below it. With the One it looks as though HTC is making that kind of statement by literally letting it define the entire One brand.

Enough about the position and the strategy for HTC, these are mostly things that are interesting to enthusiasts and industry, but not really relevant to consumers or the review of a singular product. Let’s talk about the HTC One.

Hardware

For whatever reason I always start with industrial design and aesthetics, probably because it’s the most obvious superficial thing that hits you when picking up almost anything for the first time. With a smartphone that’s even more important, since there’s so much that revolves around the in-hand feel. I pick up my phone too many times a day to count for better or worse, thus the material quality and in-hand feel really do make a big difference.

The HTC One’s fit and finish are phenomenal. There, I said it. You almost don’t even need to read the rest of this section. In my books, fit and finish goes, in descending order of quality, metal, glass, and finally plastic. Or instead of plastic, polymer, or polycarbonate, or whatever overly-specific word we use to avoid saying plastic.

I’ve talked with a lot of people about HTC’s lineup last year, and even though the One X was a well constructed plastic phone, the One S really stuck out in my mind for being a level above and beyond in terms of construction and industrial design. I asked Vivek Gowri (our resident Mechanical Engineering slash industrial design connoisseur slash mobile reviewer extraordinaire) if I was crazy, and he agreed that the One S was one of, if not the, best industrial designs of 2012.

So when I heard about M7 being on the horizon as the next flagship, I couldn’t help but worry that there would no longer be a primarily-metal contender at the high end from HTC. The HTC One is that contender, and brings unibody metal construction to an entirely new level. It is the realization of HTC’s dream for an all-metal phone.

HTC begins construction of the One from a solid piece of aluminum. Two hundred minutes of CNC cuts later, a finished One chassis emerges. Plastic gets injected into the chassis between cuts during machining for the antenna bands and side of the case, which also gets machined. The result is HTC’s “zero-gap” construction which – as the name implies – really has no gaps between aluminum and polymer at all for those unibody parts. There’s no matching of parts from different cuts to achieve an optimal fit; everything in the main chassis is cut as one solid unit. It’s the kind of manufacturing story that previously only the likes of Apple could lay claim to, and the HTC One is really the first Android device which reaches the level of construction quality previously owned almost entirely by the iPhone.

Read the full review @ AnandTech | The HTC One Review.

AnandTech | The Great Equalizer 3: How Fast is Your Smartphone/Tablet in PC GPU Terms

DSC_0081_678x452

For the past several days I’ve been playing around with Futuremark’s new 3DMark for Android, as well as Kishonti’s GL and DXBenchmark 2.7. All of these tests are scheduled to be available on Android, iOS, Windows RT and Windows 8 – giving us the beginning of a very wonderful thing: a set of benchmarks that allow us to roughly compare mobile hardware across (virtually) all OSes. The computing world is headed for convergence in a major way, and with benchmarks like these we’ll be able to better track everyone’s progress as the high performance folks go low power, and the low power folks aim for higher performance.

The previous two articles I did on the topic were really focused on comparing smartphones to smartphones, and tablets to tablets. What we’ve been lacking however has been perspective. On the CPU side we’ve known how fast Atom was for quite a while. Back in 2008 I concluded that a 1.6GHz single core Atom processor delivered performance similar to that of a 1.2GHz Pentium M, or a mainstream Centrino notebook from 2003. Higher clock speeds and a second core would likely push that performance forward by another year or two at most. Given that most of the ARM based CPU competitors tend to be a bit slower than Atom, you could estimate that any of the current crop of smartphones delivers CPU performance somewhere in the range of a notebook from 2003 – 2005. Not bad. But what about graphics performance?

To find out, I went through my parts closet in search of GPUs from a similar time period. I needed hardware that supported PCIe (to make testbed construction easier), and I needed GPUs that supported DirectX 9, which had me starting at 2004. I don’t always keep everything I’ve ever tested, but I try to keep parts of potential value to future comparisons. Rest assured that back in 2004 – 2007, I didn’t think I’d be using these GPUs to put smartphone performance in perspective.

 

AnandTech | The Great Equalizer 3: How Fast is Your Smartphone/Tablet in PC GPU Terms.

AnandTech | 3DMark for Android: Performance Preview


As I mentioned in our coverage of GL/DXBenchmark 2.7, with the arrival of Windows RT/8 we’d finally see our first truly cross-platform benchmarks. Kishonti was first out of the gate, although Futuremark was first to announce its cross platform benchmark simply called 3DMark.

Currently available for x86 Windows 8 machines, Futuremark has Android, iOS and Windows RT versions of 3DMark nearing release. Today the embargo lifts on the Android version of 3DMark, with iOS and Windows RT to follow shortly.

Similar to the situation with GL/DXBenchmark, 3DMark not only spans OSes but APIs as well. The Windows RT/8 versions use DirectX, while the Android and iOS versions use OpenGL ES 2.0. Of the three major tests in the new 3DMark, only Ice Storm is truly cross platform. Ice Storm uses OpenGL ES 2.0 on Android/iOS and Direct3D feature level 9_1 on Windows RT/8.

The Android UI is very functional and retains a very 3DMark feel. There’s an integrated results browser, history of results and some light device information as well:

There are two options for running Ice Storm: the default and extreme presets.

3DMark – Ice Storm Settings
Default Extreme
Rendering Resolution 1280×720 1920×1080
Texture Resolution Normal High
Post-processing Quality Normal High

Both benchmarks are rendered to an offscreen buffer at 720p/1080p and then scaled up to the native resolution of the device being tested. This is very similar to the approach we’ve seen game developers take to avoid rendering at native resolution on some of the ultra high resolution tablets. The beauty of 3DMark’s approach here is the fact that all results are comparable, regardless of a device’s native resolution. The downside is we don’t get a good idea of how some of the ultra high resolution tablets would behave with these workloads running at their native (> 1080p) resolutions.

Ice Storm is divided into two graphics tests and a physics test. The first graphics test is geometry heavy while the second test is more pixel shader intensive. The physics test, as you might guess, is CPU bound and multithreaded.

Before we get to the results, I should note that a number of devices wouldn’t complete the tests. The Intel-based Motorola RAZR i wouldn’t run, and the AT&T HTC One X (MSM8960) crashed before the final score was calculated, so both of those devices were excluded. Thankfully we got the Galaxy S 3 to complete, giving us a good representative from the MSM8960/Adreno 225 camp. Thermal throttling is a concern when running 3DMark, so you have to pay close attention to the thermal conditions of the device you’re testing. This is something we’re having to pay an increasing amount of attention to in our reviews these days.

Graphics Test 1

Ice Storm Graphics test 1 stresses the hardware’s ability to process lots of vertices while keeping the pixel load relatively light. Hardware on this level may have dedicated capacity for separate vertex and pixel processing. Stressing both capacities individually reveals the hardware’s limitations in both aspects.

In an average frame, 530,000 vertices are processed leading to 180,000 triangles rasterized either to the shadow map or to the screen. At the same time, 4.7 million pixels are processed per frame.

Pixel load is kept low by excluding expensive post processing steps, and by not rendering particle effects.

Although the first graphics test is heavy on geometry, it features roughly 1/4 the number of vertices of GL/DXBenchmark 2.7’s T-Rex HD test. In terms of vertex/triangle count, even Egypt HD is more stressful than 3DMark’s first graphics test. That’s not necessarily a bad thing however, as most Android titles are nowhere near as stressful as what T-Rex and Egypt HD simulate.

3DMark - Graphics Test 1

Among Android smartphones, Qualcomm rules the roost here. The Adreno 320 based Nexus 4 and HTC One both do very well, approaching 60 fps in the first graphics test. The Mali 400MP4, used in the Galaxy Note 2 and without a lot of vertex processing power, brings up the rear – being outperformed by even NVIDIA’s Tegra 3. ARM’s Mali-T604 isn’t enough to pull ahead in this test either; the Nexus 10 remains squarely behind the top two Adreno 320 based devices.

Graphics Test 2

Graphics test 2 stresses the hardware’s ability to process lots of pixels. It tests the ability to read textures, do per pixel computations and write to render targets.

On average, 12.6 million pixels are processed per frame. The additional pixel processing compared to Graphics test 1 comes from including particles and post processing effects such as bloom, streaks and motion blur.

In each frame, an average 75,000 vertices are processed. This number is considerably lower than in Graphics test 1 because shadows are not drawn and the processed geometry has a lower number of polygons.

3DMark - Graphics Test 2

As you’d expect, shifting to a more pixel shader heavy workload shows the Galaxy Note 2 doing a lot better – effectively tying the Tegra 3 based HTC One X+ and outperforming the Nexus 7. The Mali-T604 continues to, at best, tie for third place here. Qualcomm’s Adreno 320 just seems to deliver better performance in 3DMark for Android.

3DMark - Graphics

The overall score pretty much follows the trends we saw earlier. Qualcomm’s Adreno 320 leads things (Nexus 4/HTC One), followed by ARM’s Mali-T604 (Nexus 10), Adreno 225 (SGS3), Adreno 305 (One SV), Tegra 3 (One X+/Nexus 7) and finally Mali 400MP4 (Note 2). The only real surprise here is just how much better Adreno 320 does compared to Mali-T604.

Physics Test

The purpose of the Physics test is to benchmark the hardware’s ability to do gameplay physics simulations on CPU. The GPU load is kept as low as possible to ensure that only the CPU’s capabilities are stressed.

The test has four simulated worlds. Each world has two soft bodies and two rigid bodies colliding with each other. One thread per available logical CPU core is used to run simulations. All physics are computed on the CPU with soft body vertex data updated to the GPU each frame. The background is drawn as a static image for the least possible GPU load.

The Physics test uses the Bullet Open Source Physics Library.
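
To make the threading model concrete, here is a minimal sketch of the “one worker per logical core, one world per worker” structure described above. This is not Futuremark’s code: the simulation step is a dummy CPU-bound function, and it uses Python processes rather than native threads purely to get real parallelism for the illustration:

```python
# Minimal sketch: run one independent "world" per logical CPU core in parallel.
import os
from concurrent.futures import ProcessPoolExecutor

def simulate_world(world_id: int, steps: int = 200_000) -> float:
    """Stand-in for one world of rigid/soft-body simulation (dummy CPU work)."""
    x = 0.0
    for i in range(steps):
        x += (world_id + i) * 1e-9
    return x

if __name__ == "__main__":
    workers = os.cpu_count() or 1          # one worker per available logical core
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(simulate_world, range(workers)))
    print(f"Ran {len(results)} worlds across {workers} workers")
```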

3DMark - Physics

3DMark - Physics Test

The physics results give us an indication of just how heavily threaded this benchmark is. The quad-core devices are able to outperform the dual-core Cortex A15 based Nexus 10, despite the latter having far better single threaded performance. The Droid DNA/Optimus G vs. Nexus 4 results continue to be a bit odd, perhaps due to the newer drivers included with Android 4.2 on the Nexus 4 vs. 4.1.2 on the other APQ8064 platforms.

Read the full article here: AnandTech | 3DMark for Android: Performance Preview.

AnandTech | NVIDIA GeForce GTX 650 Ti Boost Review: Bringing Balance To The Force

AnandTech | NVIDIA GeForce GTX 650 Ti Boost Review: Bringing Balance To The Force.



To get our weekly geekiness quota out of the way early, the desktop video card industry is a lot like The Force. There are two sides constantly at odds with each other for dominance of the galaxy/market, and balance between the two sides is considered one of the central tenets of the system. Furthermore, when the system isn’t in balance something bad happens, whether it’s galactic domination or uncompetitive video card prices and designs.

To that end – and to bring things back to a technical discussion – while AMD and NVIDIA’s ultimate goals are to rule the video card market, in practice they serve to keep each other in check and keep the market as a whole balanced. This is accomplished by their doing what they can to offer similarly competitive video cards at most price points, particularly the sub-$300 market where the bulk of all video card sales take place. On the other hand when that balance is disrupted by the introduction of a new GPU and/or new video card, AMD and NVIDIA will try to roll out new products to restore that balance.

This brings us to the subject of today’s launch. Friday saw the launch of AMD’s Radeon HD 7790, a $149 entry-level 1080p card based on their new Bonaire GPU. AMD had for roughly the last half-year been operating with a significant price and performance gap between their 7770 and 7850 products, leaving the mid-$100 market open to NVIDIA’s GTX 650 Ti. With the 7790 AMD finally has a GTX 650 Ti competitor and more, and left unchallenged this would mean AMD would control the market between $150 and $200.

NVIDIA for their part has no interest in letting AMD take that piece of the market without a fight, and as such will be immediately countering with a new video card: the GTX 650 Ti Boost. Launching today, the GTX 650 Ti Boost is based on the same GK106 GPU as the GTX 650 Ti and GTX 660, and is essentially a filler card to bridge the gap between them. By adding GPU boost back into the mix and using a slightly more powerful core configuration, NVIDIA intends to plug their own performance gap and at the same time counter AMD’s 7850 and 7790 before the latter even reaches retail. It’s never quite that simple of course, but as we’ll see the GTX 650 Ti Boost does indeed bring some balance back to the Force.

NVIDIA GPU Specification Comparison
GTX 660 GTX 650 Ti Boost GTX 650 Ti GTX 550 Ti
Stream Processors 960 768 768 192
Texture Units 80 64 64 32
ROPs 24 24 16 16
Core Clock 980MHz 980MHz 925MHz 900MHz
Boost Clock 1033MHz 1033MHz N/A N/A
Memory Clock 6.008GHz GDDR5 6.008GHz GDDR5 5.4GHz GDDR5 4.1GHz GDDR5
Memory Bus Width 192-bit 192-bit 128-bit 192-bit
VRAM 2GB 1GB/2GB 1GB/2GB 1GB
FP64 1/24 FP32 1/24 FP32 1/24 FP32 1/12 FP32
TDP 140W 134W 110W 116W
GPU GK106 GK106 GK106 GF116
Architecture Kepler Kepler Kepler Fermi
Transistor Count 2.54B 2.54B 2.54B 1.17B
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 40nm
Launch Price $229 $149/$169 $149 $149

When NVIDIA produced the original GTX 650 Ti, they cut down their GK106 GPU by a fairly large degree to reach the performance and power levels we see with that card. From 5 SMXes and 3 ROP/Memory partitions, GK106 was cut down to 4 SMXes and 2 ROP partitions, along with having GPU boost removed and overall clockspeeds lowered. In practice this left a pretty big gap between the GTX 650 Ti and the GTX 660, one which AMD’s 7850 and now their 7790 serve to fill.

Despite the name GTX 650 Ti Boost, it’s probably more meaningful to call NVIDIA’s new card the GTX 660 light. The GTX 650 Ti Boost restores many of the cuts NVIDIA made for the GTX 650 Ti; this latest 650 has the core clockspeed, memory clockspeed, GPU boost functionality, and ROP partitions of the GTX 660. In fact the only thing differentiating the GTX 660 from the GTX 650 Ti Boost is a single SMX; the GTX 650 Ti Boost is still a 4 SMX part, and this is what makes it a 650 in NVIDIA’s product stack (note that this means GTX 650 Ti Boost parts will similarly have either 2 or 3 GPCs depending on which SMX is cut). Because clockspeeds are identical to the GTX 660, the GTX 650 Ti Boost will be shipping at 980MHz for the base clock, 1033MHz for the boost clock, and 6GHz for the memory clock.

The result of this configuration is that the GTX 650 Ti Boost is much more powerful than the name would let on, and in practice is closer to the GTX 660 in performance than it is to the GTX 650 Ti. Compared to the GTX 650 Ti, the GTX 650 Ti Boost has just 106% of the shading/texturing/geometry throughput, but due in large part to the return of the 3rd ROP partition, ROP throughput has been boosted to 159%. Meanwhile, thanks to the combination of higher memory clocks and the full 192-bit memory bus, memory bandwidth has been increased to 166% of the GTX 650 Ti’s. Or compared to a GTX 660, the GTX 650 Ti Boost has 100% of the ROP throughput, 100% of the memory bandwidth, and 80% of the shading/texturing/geometry performance. The end result is that in memory/ROP bound scenarios performance will trend close to the GTX 660, while in shader/texture/geometry bound situations performance will easily exceed the GTX 650 Ti’s by 6-16%, depending on where GPU boost settles.
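
Those percentages follow directly from the specification table above; here is a quick sanity check, assuming throughput scales linearly with unit count × clock (base clocks used, since boost residency varies):

```python
# Back-of-the-envelope throughput ratios: GTX 650 Ti Boost vs GTX 650 Ti.
def pct(a, b):
    return round(a / b * 100)

shading   = pct(768 * 980, 768 * 925)      # same 4 SMXes, higher core clock
rop       = pct(24 * 980, 16 * 925)        # 3 vs 2 ROP partitions
bandwidth = pct(6008 * 192, 5400 * 128)    # memory clock (MHz) x bus width (bits)

print(shading, rop, bandwidth)             # ~106, ~159, ~167 percent
```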

Of course GTX 660-like performance does come with some tradeoffs. While the GTX 650 Ti was a 110W TDP part, the GTX 650 Ti Boost will be a 134W part, just shy of the 140W GTX 660. The GTX 650 Ti Boost runs at the same clockspeeds and the same voltages with the same amount of RAM as the GTX 660, meaning the power savings are limited to whatever power is saved from fusing off that SMX, which in practice will not be all that much. Even by NVIDIA’s own reckoning they’re minimal. So what we’re effectively looking at is a somewhat slower GTX 660 operating at near-GTX 660 power levels.

Driving home the point that the GTX 650 Ti Boost is a reconfigured GTX 660, with the TDP held so close to the GTX 660’s 140W, NVIDIA and their partners will be recycling their GTX 660 designs for NVIDIA’s new card. Our reference card is identical to our GTX 660 reference card, and the same can be said for many partner designs. Partners need to provide the same power and cooling to the GTX 650 Ti Boost as they do the GTX 660, so there’s little point in rolling out new designs, and in fact this helps NVIDIA and their partners get the GTX 650 Ti Boost to market sooner.

AnandTech | ASUS Maximus V Formula Z77 ROG Review


AnandTech | ASUS Maximus V Formula Z77 ROG Review.

The motherboard market is tough – the enthusiast user would like a motherboard that does everything but is cheap, and the system integrator would like a stripped out motherboard that is even cheaper.  An overclocker would like a minimalist setup that can push the limits of stability, and the gamer would like an all singing, all dancing everything.  The ASUS Maximus V Formula is designed to cater mainly to the gamer, but also to the enthusiast and the overclocker, for an all-in-one product with a distinct ROG feel.  With the combination air/water VRM cooling system, a mini-PCIe combo card with dual band WiFi and an mSATA port, one of the best on-board audio solutions and the regular array of easy-to-use BIOS/Software, ASUS may be onto a winner – and all they ask for is $270-300.

Overclocking for Z77 – Why Focus on Extreme Overclockers?

The motherboard market shrank in 2012, with reports suggesting that from the 80 million motherboards sold in 2011, this was down to 77 million worldwide in 2012.  In order to get market share, each company had to take it from someone else, or find a new niche in an already swollen industry.  To this extent, after the success of the ROG range, the top four motherboard manufacturers now all have weapons when it comes to hitting the enthusiast or power user with an overclocking platform.  These weapons are (with prices correct as of 3/7):

$400 – Gigabyte Z77X-UP7 (our review)
$379 – ASUS Maximus V Extreme
$290 – ASUS Maximus V Formula
$225 – ASRock Z77 OC Formula (our review, Silver Award)
$200 – ASUS Maximus V Gene
$190 – MSI Z77 MPower (our review)

There are two main differentiators between the low (<$300) and the high (>$350) end.  The first is the inclusion of a PLX PEX 8747 chip, to allow 3-way or 4-way GPU setups.  We covered how the PLX chip works in our 4-board review here, but this functionality can add $30-$80 onto the board (depending on the bulk purchase order of the manufacturer and the profit margins wanted).  The second is usually attributed to the functionality and power delivery – the 32x IR3550s used on the Gigabyte Z77X-UP7 cost a pretty penny, and the extensive feature list of the ASUS ROG boards usually filters through.

AnandTech | AMD Radeon HD 7790 Review Feat. Sapphire: The First Desktop Sea Islands



In an industry that has long grown accustomed to annual product updates, the video card industry is one where the flip of a calendar to a new year brings a lot of excitement, anticipation, speculation, and maybe even a bit of dread for consumers and manufacturers alike. It’s no secret then that with AMD launching most of their Radeon HD 7000 series parts in Q1 of 2012, the company would be looking to refresh their product lineup this year. Indeed, they removed doubt before 2012 even came to a close when they laid out their 8000M plans for the first half of 2013, revealing their first 2013 GPU and giving us a mobile roadmap with clear spots for further GPUs. So we have known for months that new GPUs would be on their way; the questions being what would they be and when would they arrive?

The answer to that, as it turns out, is a lot more complex than anyone was expecting. It’s been something of an epic journey getting to AMD’s 2013 GPU launches, and not all for good reasons. A PR attempt to explain that the existing Radeon HD 7000 series parts would not be going away backfired in a big way, with AMD’s calling their existing product stack “stable through 2013” being incorrectly interpreted as their intention to not release any new products in 2013. This in turn led to AMD going one step further to rectify the problem by publicly laying out their 2013 plans in greater (but not complete) detail, which thankfully cleared a lot of confusion. Though not all confusion and doubt has been erased – after all, AMD has to save something for the GPU introductions – we learned that AMD would be launching new retail desktop 7000 series cards in the first half of this year, and that brings us to today.

Launching today is AMD’s second new GPU for 2013 and the first GPU to make it to the retail desktop market: Bonaire. Bonaire in turn will be powering AMD’s first new retail desktop card for 2013, the Radeon HD 7790. With the 7790 AMD intends to fill the sometimes wide chasm in price and performance between their existing 7770 (Cape Verde) and 7850 (Pitcairn) products, and as a result today we’ll see just how Bonaire and the 7790 fit into the big picture for AMD’s 2013 plans.

AMD GPU Specification Comparison
AMD Radeon HD 7790 AMD Radeon HD 7850 AMD Radeon HD 7770 AMD Radeon HD 6870
Stream Processors 896 1024 640 1120
Texture Units 56 64 40 56
ROPs 16 32 16 32
Core Clock 1000MHz 860MHz 1000MHz 900MHz
Memory Clock 6GHz GDDR5 4.8GHz GDDR5 4.5GHz GDDR5 4.2GHz GDDR5
Memory Bus Width 128-bit 256-bit 128-bit 256-bit
VRAM 1GB 2GB 1GB 1GB
FP64 1/16 1/16 1/16 N/A
Transistor Count 2.08B 2.8B 1.5B 1.7B
Target Board Power ~85W 150W (TDP) ~80W 151W (TDP)
Manufacturing Process TSMC 28nm TSMC 28nm TSMC 28nm TSMC 40nm
Architecture GCN 1.1* GCN 1.0 GCN 1.0 VLIW5
Launch Date 03/22/2013 03/05/2012 02/15/2012 10/21/2010
Launch Price $149 $249 $159 $239

Diving right into things like always, Bonaire is designed to be an in-between GPU; something to go between the 10 Compute Unit Cape Verde GPU, and the 20 CU Pitcairn GPU. Pitcairn, as we might recall, is almost entirely twice the GPU that Cape Verde is. It has twice as many shaders, twice as many ROPs, twice as many geometry processors, and twice as wide a memory bus. Not surprisingly then, the performance gap between the two GPUs at similar clockspeeds approaches that two-fold difference, and even with binning and releasing products like the 7850 this leaves a fairly large gap in performance.

As AMD intends to carry the existing Southern Islands family forward into 2013, their strategy for the mid-to-low end of the desktop market has become one of filling in that gap. This is a move made particularly important for AMD due to the fact that NVIDIA’s GK106-powered GeForce GTX 650 Ti sits rather comfortably between AMD’s 7770 and 7850 in price and performance, robbing AMD of that market segment. Bonaire in turn will fill that gap, and the 7790 will be the flagship desktop Bonaire video card.

So what are we looking at for Bonaire and the 7790? As the 7790 will be a fully enabled Bonaire part, what we’ll be seeing with the 7790 today will be everything that Bonaire can offer. On the specification front we’re looking at 14 CUs, which breaks down to 896 stream processors paired with 56 texture units, giving Bonaire 40% more shading and texturing performance than Cape Verde. As a further change to the frontend, the number of geometry engines and command processors (ACEs) has been doubled compared to Cape Verde from 1 to 2 each, giving Bonaire the ability to process up to 2 primitives per clock instead of 1, bringing it up to parity with Pitcairn and Tahiti. Finally, the backend remains unchanged; like Cape Verde, Bonaire has 16 ROPs attached to a 128bit memory bus, giving it equal memory bandwidth and equal ROP throughput at equivalent clockspeeds.

Moving on to the 7790 in particular, the 7790 will be shipping at a familiar 1GHz, the same core clockspeed as the 7770. So all of those performance improvements due to increases in functional units translate straight through – compared to the 7770, the 7790 has 40% more theoretical compute/shading performance, 40% more texturing performance, 100% more geometry throughput, and no change in ROP throughput. Meanwhile in a move mirroring what AMD did with the 7970 GHz Edition last year, AMD has bumped up their memory clocks. 7790 will ship with a 6GHz memory clock thanks to a higher performing (i.e. not from Cape Verde) memory interface, which compared to the 7770’s very conservative 4.5GHz memory clock means that the 7790 will have 33% more memory bandwidth compared to 7770, despite the fact that the memory bus itself is no wider.

Putting it all together, as long as the 7790 is not ROP bottlenecked, it stands to be 33%-100% faster than the 7770. Or relative to the 7850, the 7790 offers virtually all of the 7850’s texturing and shading performance (it’s actually 2% faster), while offering only around 60% of the memory bandwidth and ROP throughput.
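
The same back-of-the-envelope approach works here too; a short sketch checking the 7790 comparisons against the specification table, again assuming throughput scales with unit count × clock:

```python
# Radeon HD 7790 vs 7770 and 7850, from the spec table above.
def pct(a, b):
    return round(a / b * 100)

# vs 7770: 896 vs 640 SPs at the same 1000MHz core, 6GHz vs 4.5GHz on a 128-bit bus
print(pct(896 * 1000, 640 * 1000))   # shading: 140 -> "40% more"
print(pct(6.0 * 128, 4.5 * 128))     # memory bandwidth: 133 -> "33% more"

# vs 7850: 1024 SPs at 860MHz, 4.8GHz memory on a 256-bit bus
print(pct(896 * 1000, 1024 * 860))   # shading: ~102 -> "actually 2% faster"
print(pct(6.0 * 128, 4.8 * 256))     # bandwidth: ~62 -> "around 60%"
```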

On the power front, unsurprisingly power consumption has gone up a bit. As a reminder, AMD does not quote TDPs, but rather “typical board power”, which is AMD’s estimate of what power consumption will be like under an average workload. The 7770’s official TBP is 80W, while the 7790’s is 85W. We’ll have our own breakdown on this in our look at power, temperature, and noise, but it’s fair to say that the 7790 draws only a small amount of additional power over the 7770. Ultimately this can be attributed to the fact that while Bonaire is a larger chip, it’s not extremely so, with only the additional CUs and the extra geometry/ACE pipeline separating the two. Mix in gradual improvements over the last year on TSMC’s 28nm process and better power management from AMD, and it’s possible to make these kinds of small improvements while not pushing load power too much higher.

On the note of Bonaire versus Cape Verde, let’s also talk a bit about transistor count and die sizes. Unsurprisingly, Bonaire sits between Cape Verde and Pitcairn in transistor count and die size. Altogether Bonaire comes in at 2.08B transistors, occupying a 160mm2 die. This is as compared to Cape Verde’s 1.5B transistors and 123mm2 die size, or Pitcairn’s 2.8B transistors and 212mm2 die size. For AMD their closest chip in terms of die size in recent history would be Juniper, the workhorse of the Evergreen family and the Radeon HD 5770, which came in at 166mm2.

Moving on, as is consistent with AMD’s previous announcements, the 7790 is being launched as just that: the 7790. AMD has told us that they intend to keep the HD 7000 brand in retail this year due to the success of the brand, and to that end our first Bonaire card is a 7700 series card. The namespace collision is unfortunate – sticking with the 7000 series means AMD is facing the pigeonhole principle and has to put new GPUs in existing sub-series – but ultimately this is something AMD shouldn’t have any real problems executing on. We’ll get into the microarchitecture of Bonaire on our next page, but for gamers and other consumers Bonaire may as well be another member of the Southern Islands GPU family, so it fits in nicely in the 7000 series despite being from a new wave of GPUs.

With that in mind, let’s talk about product positioning and pricing. The 7790 will launch at $149, roughly in between the 7770 and the 7850. AMD will be positioning it as an entry-level 1080p graphics card, and though it’s a 7700 series part its closest competition in AMD’s product stack is more likely to be the 7850, which it’s closer to on the basis of both price and performance.

Against the competition, the 7790’s closest competition will be the GeForce GTX 650 Ti. However with the price of that card regularly falling to $130 and lower, the 7790 is effectively carving out a small niche for itself where it will be a bit ahead of the GTX 650 Ti in both performance and in price. NVIDIA’s next card up is the GTX 660, at more than $200.

For anyone looking to pick up a 7790 today, this is being launched ahead of actual product availability (likely to coincide with GDC 2013 next week). Cards will start showing up in the market on April 2nd, which is about a week and a half from now. Notably, AMD and their partners will be launching stock clocked and factory overclocked parts right away, and from what we’re being told factory overclocked cards will be prolific from day one. Overall we’re expecting this launch to be a lot like the launch of the GTX 560, where NVIDIA did something very similar. In which case we should see both stock and factory overclocked parts right away with more factory overclocked parts than stock parts, and if it does play out like the 560 then stock clocked cards would become a larger piece of the 7790 inventory later in the lifetime of the 7790.

Finally, AMD is wasting no time in extending their Never Settle Reloaded bundle to the 7790. As the 7790 is a cheaper card it won’t come with as many games as the more expensive Radeon cards, but for 7790 buyers they will be receiving a voucher for Bioshock Infinite with their cards. MSRPs/values are usually a poor way to look at the significance of game bundles, but it goes without saying that it’s not too often that $150 cards come with brand-new AAA games.

Spring 2013 GPU Pricing Comparison
AMD Price NVIDIA
$219 GeForce GTX 660
Radeon HD 7850 $179
Radeon HD 7790 $149
$134 GeForce GTX 650 Ti
Radeon HD 7770 $109 GeForce GTX 650
Radeon HD 7750 $99 GeForce GT 640

Read the full review @ AnandTech

AnandTech | Toshiba Announces THNSNF Series SSDs: 19nm NAND Is Here

Toshiba announced their THNSNF SSD series today. The announcement was long overdue as currently Toshiba’s fastest SATA offering is the HG3 series, which was released in January 2010. The name of the new series is certainly not the most user friendly, but it should be kept in mind that so far Toshiba has only sold their SSDs to OEMs, so the naming is not that important.

THNSNF will finally bring SATA 6Gb/s support to Toshiba SSDs, and the series is based on Toshiba’s own controller. Toshiba has definitely taken their time developing this controller considering that the first SATA 6Gb/s SSDs (Crucial’s RealSSD C300) hit retail, albeit with some growing pains, back in early 2010—over two years ago. The actual model number of the controller is still unknown, but it’s possible that it’s the same controller (TC58NC5HJGSB-01) that surfaced in IO-Data’s SSDs a couple of months ago. On the other hand, Toshiba is known for quality and reliability with their SSDs, so it’s not that surprising that it took this long for them to test and validate a SATA 6Gb/s controller—it can easily take over a year of validation to make sure everything works properly.

On top of the brand new controller, Toshiba is also using their own state of the art 19nm Toggle-Mode 2.0 MLC NAND. Some of Toshiba’s 24nm NAND used the Toggle-Mode 2.0 interface as well so it’s not brand new, but at 400MB/s it’s faster than what ONFi can provide at this point. Toshiba is in fact the first SSD company to announce SSDs based on sub-20nm NAND, though we should start seeing 64Gb 20nm IMFT NAND soon unless Intel and Micron have issues with the new process node. Here’s an overview of the new Toshiba SSDs.

Toshiba THNSNF Series Specifications
Model Number THNSNFxxxGBSS THNSNFxxxGCSS THNSNFxxxGMCS
Form Factor 2.5″ 9.5mm 2.5″ 7mm mSATA
Capacities 64GB, 128GB, 256GB, 512GB 64GB, 128GB, 256GB
Sequential Read 524MB/s
Sequential Write 461MB/s (440MB/s for 64GB)
4K Random Read 80K IOPS (50K IOPS for 64GB)
4K Random Write 35K IOPS (25K IOPS for 64GB)

The ‘xxx’ in the model numbers represents the capacity of the drive, so a real model number would look like THNSNF256GBSS for a 256GB 2.5″ 9.5mm THNSNF drive, for example. On the performance front, it seems that Toshiba’s decision not to rush the controller has resulted in good returns. Random write IOPS could be better, but the other specifications look very, very promising. 440MB/s sequential write for a 64GB SSD would make the THNSNF one of the fastest 64GB SSDs on the market.
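
To make the naming scheme a little easier to follow, here is a hypothetical decoder for it; the field layout is inferred from the table above and may not match Toshiba’s official part-numbering rules:

```python
# Hypothetical THNSNF model-number decoder (field layout inferred, not official).
import re

FORM_FACTORS = {"BSS": '2.5" 9.5mm', "CSS": '2.5" 7mm', "MCS": "mSATA"}

def decode_thnsnf(model: str) -> dict:
    m = re.fullmatch(r"THNSNF(\d+)G(BSS|CSS|MCS)", model)
    if not m:
        raise ValueError(f"not a THNSNF model number: {model}")
    return {"capacity_gb": int(m.group(1)), "form_factor": FORM_FACTORS[m.group(2)]}

print(decode_thnsnf("THNSNF256GBSS"))
# -> {'capacity_gb': 256, 'form_factor': '2.5" 9.5mm'}
```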

Toshiba is also touting the THNSNF series as very power efficient, claiming power consumption of less than 0.1W in the press release. The press release does not mention how the power consumption was tested, but even for idle power consumption 0.1W is extremely low—so far the best we have tested is 0.27W. Utilizing smaller process node NAND obviously helps with power consumption, but Toshiba must have paid a lot of attention to power consumption in their controller and firmware design as well.

Today Toshiba is only making a product announcement as the THNSNF series is not even in production yet. According to the press release, mass production will begin in August 2012 and hence availability should be later in 2012. Again, I would like to emphasize that Toshiba has only sold their SSDs to other OEMs, so it’s likely that you won’t see these drives in stores. However, another SSD OEM may buy and rebrand the THNSNF series, which is what Kingston did with their SSDNow V+100 series.

As a final thought, Apple is a huge OEM that has been getting most of their SSDs from Toshiba. All Macs except the MacBook Air come with a Toshiba HG3 SSD if the buyer chooses to configure their Mac with an SSD. MacBook Air SSDs are sourced from both Toshiba and Samsung, mainly to avoid component shortages given the popularity of the MacBook Air, though our own testing revealed the Samsung-equipped MBAs offered better performance. The THNSNF series would be a logical upgrade path for Apple, though on the other hand availability is later this year; Samsung has been shipping their 830 SSD series for nearly a year now. Now that mobile Ivy Bridge is out, we should see where Apple is going in a matter of months, maybe even weeks. Either way, Macs are in need of SATA 6Gb/s SSDs and it’s always possible that Apple will surprise us by going with a totally different brand. Or who knows, maybe they have developed something in-house after the Anobit acquisition?

Source: Toshiba Press Release

Source: AnandTech.
