
The Next Huge Leap in Computing Power?

Started by April 09, 2014 04:21 AM
43 comments, last by Prefect 10 years, 6 months ago

Hi,

About ten years ago, tech gurus were saying that this or that technology was right around the corner and would bring a huge leap in computer performance. When will the next such technology arrive, and what will it be? Are we really stuck for a while? Is it hardware, software, or both? I am surprised the conspiracy theorists aren't all over this! LOL

Personal life and your private thoughts always affect your career. Research is the intellectual backbone of game development and the first order. Version control is crucial for full management of applications and software. The better the workflow pipeline, the greater the potential output for a quality game. Completing projects is the last but finest order.

by Clinton, 3Ddreamer

Search for the GameSpot video of Tim Sweeney; he tries to answer some of these questions. One of the things he talks about is stacking silicon vertically - layer upon layer of chips - and he also touches a bit on quantum computing. I think it's pretty much certain that Intel/AMD will find ways to keep increasing power until technologies like quantum computing become commercially viable.

You can also sift through some of these: http://en.wikipedia.org/wiki/List_of_emerging_technologies - things like memristors, quantum dot screens, spintronics, 3D integrated circuits, and transistors made from carbon nanotubes (which could run at 1 THz). But of course these could all take decades to commercialize.

The question then is where 3D graphics can go once full path tracing is possible in real time. The answer would be more and more emulation of nature, to the point where we have a 4D world with a genuine time dimension in which nature grows as it would naturally: you could watch grass actually growing, and you could have extremely accurate weather patterns, AI, and physics instead of the many static objects and approximations we have now. Turn everything into a fully dynamic, living world. But that would take an extraordinary amount of CPU power.

I think one of the things that will eventually happen is that the GPU will disappear once CPUs are powerful enough to handle all computation; after all, a GPU is really just a partially fixed-pipeline CPU.

But we are limited by our natural senses: we can perceive only about 11 million colours, a limit displays have obviously already passed, and we can't distinguish frame rates much above about 70 FPS. I think Tim Sweeney mentioned that 8K screens will be roughly the resolution limit of our vision (check the video).

Just pump in more and more CPU power until we have a living, breathing world rendered at CGI-film quality. But there must come a limit; after all, we are limited by our own senses, unless we start to genetically engineer ourselves into super-beings with extra senses.


The question then is where 3D graphics can go once full path tracing is possible in real time.

I don't think this is where 3D graphics is going, and more importantly I don't think it would actually solve any of the important problems in real-time graphics. If we had full path/ray tracing right now, at very high speeds, it would not improve visuals at all. On top of that, the biggest drawbacks of that approach actually involve what happens with memory and memory bandwidth rather than computing power. Betting on memory bandwidth over computational power is not smart or savvy IMO.

The answer would be more and more emulation of nature, to the point where we have a 4D world with a genuine time dimension in which nature grows as it would naturally: you could watch grass actually growing, and you could have extremely accurate weather patterns, AI, and physics instead of the many static objects and approximations we have now. Turn everything into a fully dynamic, living world. But that would take an extraordinary amount of CPU power.

Sounds cool on paper. Makes no sense in practice. As an exercise, go spend some time watching grass grow. Next, and somewhat more interesting, watch a time-lapse video of grass growing. Kinda cool to watch for a few minutes, but ultimately not an interesting development. More seriously, these types of problems aren't limited by compute power in the first place. We lack a far more fundamental understanding of the process, and those who think that enough CPU power and machine learning will "solve" the problem are going to be disappointed.


I think one of the things that will eventually happen is that the GPU will disappear once CPUs are powerful enough to handle all computation; after all, a GPU is really just a partially fixed-pipeline CPU.

I've been hearing that story as long as I've been listening to talk about graphics and hardware. Nearly fourteen years now. Yet the reality of the situation is that monolithic CPUs are less important than they ever were, whereas GPUs have revolutionized a wide variety of fields over the last ten years. Betting against the GPU is a foolish move.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

I think one of the things that will eventually happen is that the GPU will disappear once CPUs are powerful enough to handle all computation; after all, a GPU is really just a partially fixed-pipeline CPU.

GPUs these days have very little fixed-function pipeline left in them. With compute shaders, they're now basically an extremely wide SIMD, massively hyper-threaded, many-core RISC co-processor, with hierarchical and/or NUMA memory and an asynchronous DMA controller to boot.

CPUs are never going to replace such a thing - unless they become GPUs themselves, which won't happen, because then they wouldn't run legacy code efficiently.

Most of our code is stuck on the CPU because we all learned a particular way of writing software, and not enough people have re-learned how to engineer their software for other kinds of hardware architectures yet.

We still assume that it's a good abstraction for any bit of our code to be able to have a pointer to any bit of our data, that RAM is one huge flat array of data, and that the CPU can operate directly on that data.

In reality, CPU makers have done a tonne of magic behind the scenes to make that abstraction seem like it works. They mirror RAM into small caches (otherwise our software would be 1000x slower), run statistical prediction algorithms to guess which bits of RAM to mirror at what times, reorder our code to hide memory access latencies, and speculatively start executing branches that might not actually be taken. Then they insert invisible stalls and fences so that all this parallel guesswork behaves just like the hypothetical serial abstract machine your C code was written for.
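
To make that concrete, here's a tiny C++ sketch of my own (a toy example - the exact ratio depends entirely on the machine): both loops do exactly the same arithmetic on the same array, but one walks memory in the order the cache likes and the other strides across it, and the timing difference is entirely down to that hidden machinery.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 4096;
        std::vector<float> a(N * N, 1.0f);   // 64 MB of floats

        auto sum_rows = [&] {                // walks memory contiguously - cache friendly
            float s = 0;
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x)
                    s += a[y * N + x];
            return s;
        };
        auto sum_cols = [&] {                // strides through memory - cache hostile
            float s = 0;
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    s += a[y * N + x];
            return s;
        };

        auto time = [](auto f) {             // crude wall-clock timing of one pass
            auto t0 = std::chrono::steady_clock::now();
            volatile float r = f();
            (void)r;
            return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
        };

        std::printf("row-major %.3fs, column-major %.3fs\n", time(sum_rows), time(sum_cols));
    }

Same instructions, same data, wildly different speed - because RAM isn't actually the flat, uniformly fast array the abstraction pretends it is.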

Instead of wasting transistors on all that magic, the SPE design discarded it and spent those transistors on adding more cores. The cores didn't do magic behind the scenes; instead they required programmers to write code differently. Instead of a magic cache that sometimes makes RAM seem fast, they gave us an asynchronous DMA controller, which lets you perform a non-blocking memcpy between RAM and the core's local memory and then poll to see whether it has completed. You have to explicitly tell the chip in advance that you need some data moved from RAM into local memory so the core can operate on it, instead of having the CPU pretend it can operate on RAM directly (it can't) and doing guesswork to manage a cache for you. They also removed the statistical branch prediction magic and instead relied on the programmer annotating their if statements as statistically more likely to be true or false. And if you need to write results back to RAM, you don't wait for the CPU to push out each word one by one -- you kick off an async memcpy and keep doing useful work while the DMA hardware finishes the transfer in the background (see the sketch below).
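
Here's roughly what that style of code looks like. Big caveat: dma_get_async / dma_put_async / dma_wait are names I've invented for illustration (backed by plain memcpy here so the sketch actually compiles); on real SPE-style hardware they'd map to whatever async DMA primitives the SDK exposes, and the branch-hint half of the story would be likely/unlikely annotations such as GCC's __builtin_expect. The double-buffering shape is the point:

    #include <cstddef>
    #include <cstring>

    // Invented stand-ins for an async-DMA API. On a normal PC they just memcpy
    // synchronously; on SPE-style hardware they would start background transfers
    // between main RAM and the core's small local memory, identified by a tag.
    static void dma_get_async(void* local, const void* ram, std::size_t bytes, int /*tag*/) { std::memcpy(local, ram, bytes); }
    static void dma_put_async(void* ram, const void* local, std::size_t bytes, int /*tag*/) { std::memcpy(ram, local, bytes); }
    static void dma_wait(int /*tag*/) {}  // would block until that tag's transfers complete

    constexpr std::size_t kChunkBytes  = 16 * 1024;                    // what fits in local memory
    constexpr std::size_t kChunkFloats = kChunkBytes / sizeof(float);
    static float buf[2][kChunkFloats];                                 // double buffer in "local" memory

    // Stream 'count' floats through local memory, doubling each one.
    // Assumes count is a multiple of kChunkFloats, to keep the sketch short.
    void process_stream(const float* in, float* out, std::size_t count) {
        const std::size_t chunks = count / kChunkFloats;
        dma_get_async(buf[0], in, kChunkBytes, 0);                     // prefetch the first chunk
        for (std::size_t i = 0; i < chunks; ++i) {
            const int cur = static_cast<int>(i & 1), nxt = cur ^ 1;
            if (i + 1 < chunks) {
                dma_wait(nxt);                                         // that buffer's old write-back must be done
                dma_get_async(buf[nxt], in + (i + 1) * kChunkFloats, kChunkBytes, nxt);
            }
            dma_wait(cur);                                             // this chunk has arrived in local memory
            for (std::size_t j = 0; j < kChunkFloats; ++j)             // the actual work, on local data only
                buf[cur][j] = buf[cur][j] * 2.0f;
            dma_put_async(out + i * kChunkFloats, buf[cur], kChunkBytes, cur);  // write back asynchronously
        }
        dma_wait(0); dma_wait(1);                                      // drain outstanding write-backs
    }

    int main() {
        static float in[kChunkFloats * 4], out[kChunkFloats * 4];
        for (auto& x : in) x = 1.0f;
        process_stream(in, out, kChunkFloats * 4);
    }

On this style of hardware the transfers for chunk N+1 overlap with the compute on chunk N, so the core is never just sitting there waiting on memory.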

The result was a CPU that required a radically different approach to writing software (arguably better, arguably worse, just different), but offered an insane amount of performance thanks to that completely different design.

But we can't have such a thing because of inertia. We're stuck with CPUs continuing to emulate designs that we decided on in the 80's, because that's the hypothetical abstract machine that our programming languages are designed around.

GPUs on the other hand have inertia of their own. The popularity of computer graphics has ensured that every single PC now has a GPU inside it. Computer graphics people were happy to learn how to write their code differently in order to gain performance, giving the manufacturers unlimited freedom to experiment with different hardware designs. The result is a huge amount of innovation and actual advancement in processing technology, and in parallel software engineering knowledge.

Because they've been so successful, and aren't going anywhere soon (CPU architectures are designed so differently that they can't compete for parallel workloads), everyone else now has a chance to learn how to write their programs in the ultra-wide-SIMD compute shader pattern, if they care to.
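
In case "the compute shader pattern" sounds mysterious: stripped of any particular API, it's a kernel written as a pure function of a global index over flat arrays, dispatched across the whole range. Here's a minimal C++ sketch of that shape (nothing GPU-specific about it, which is rather the point - a real GPU just runs thousands of these invocations at once in wide SIMD groups):

    #include <cstddef>
    #include <vector>

    // A "kernel": one invocation per element, no shared mutable state, no pointer chasing.
    // On a GPU this body would be a compute shader; here it's just a function of the index.
    inline void scale_add_kernel(std::size_t i, const float* a, const float* b, float* out, float k) {
        out[i] = a[i] * k + b[i];
    }

    // The "dispatch": run the kernel over the whole index range.
    // A GPU runs these in wide SIMD groups; a CPU version could fan this out over threads/SIMD lanes.
    void dispatch(std::size_t count, const float* a, const float* b, float* out, float k) {
        for (std::size_t i = 0; i < count; ++i)
            scale_add_kernel(i, a, b, out, k);
    }

    int main() {
        std::vector<float> a(1024, 1.0f), b(1024, 2.0f), out(1024);
        dispatch(out.size(), a.data(), b.data(), out.data(), 3.0f);
    }

Once a system is expressed that way, moving it from the CPU to GPU compute hardware is mostly a porting exercise rather than a redesign.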


In the last generation of game engines, we saw systems that had traditionally been single-threaded (and there were a lot of people preaching "games are inherently single-threaded, multi-core won't help!") replaced by multi-core and NUMA-compatible systems.
In the next generation of game engines, we're going to see many of these systems move off the CPU and over to the GPU compute hardware instead.

The fact is that, in terms of ops per joule, GPUs and processors such as the SPEs are far, far ahead of traditional CPUs by design. Note that the PS3's Cell CPU is, what, 8 years old now, yet it's still in the same ballpark as a modern Core i7 in terms of peak FLOPS, thanks to a more efficient (but very non-traditional) design...
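
Back-of-envelope arithmetic, using the commonly quoted peak figures (treat these as my rough estimates, not a benchmark): each SPE can issue a 4-wide single-precision multiply-add every cycle, i.e. 8 flops/cycle, so 8 x 3.2 GHz = 25.6 GFLOPS per SPE, or roughly 150-200 GFLOPS across the 6-8 SPEs. A quad-core i7 with AVX (pre-FMA) peaks at about 16 flops/cycle per core, so 4 x 16 x ~3.5 GHz = ~224 GFLOPS - the same ballpark, the better part of a decade later.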

I am very interested in advances in voxel rendering.

I'm not talking about the boring Minecraft-style rendering.

I've seen some incredible demos recently, and I think that's one path forward which has real promise.

The 'tech gurus' missed one important thing: most computer users aren't tech gurus and don't play the latest and greatest PC games.

For them, computers got 'good enough' about ten years ago, honestly. Why upgrade when there's no need to? If you have a machine that will do everything you need and want it to do, why spend more money on things that aren't going to help with what you actually do? GPUs could become ten times faster and hold sixteen times the RAM they do now. That won't make your word processor type any faster, or pop your email up quicker than it does now, or even display 99.99% of webpages any better than they do now.

Why do you think people are holding on to XP (putting aside the issues with Windows 8)? It works. All their software runs. Life is good. They aren't going to spend hundreds to a thousand dollars for a new computer (the only way they know how to 'upgrade' an OS) to do the same things they're doing today. As long as they can look at photos, listen to music, watch a movie, and do their email, Facebook, and FarmVille, what more could they possibly want?

"The multitudes see death as tragic. If this were true, so then would be birth"

- Pisha, Vampire the Masquerade: Bloodlines


I agree with Mouser9169.

For the average PC user, the technology got where it needed to be around the time of the Pentium 4 or Pentium D. Since then, the speed increases don't really affect anybody who isn't a gamer or a power user. And even in games and 3D modeling/CAD, there hasn't been much in the past 5-8 years that has genuinely required a massive increase in power.

If anything, the main area of research in computing tech right now is how to make the stuff we already have smaller and less power-hungry, so it can be used in mobile and wearable devices.

You can also sift through some of these: http://en.wikipedia.org/wiki/List_of_emerging_technologies - things like memristors [...]


Interesting, these memristors. A quote from Wikipedia, under the "Applications" section:

They can potentially be fashioned into non-volatile solid-state memory, which would allow greater data density than hard drives with access times similar to DRAM, replacing both components.[55] HP prototyped a crossbar latch memory that can fit 100 gigabits in a square centimeter,[9] and proposed a scalable 3D design (consisting of up to 1000 layers or 1 petabit per cm3).[56] In May 2008 HP reported that its device reaches currently about one-tenth the speed of DRAM.[57] The devices' resistance would be read with alternating current so that the stored value would not be affected.[58] In May 2012 it was reported that access time had been improved to 90 nanoseconds if not faster, approximately one hundred times faster than contemporaneous flash memory, while using one percent as much energy.[59]


1 petabit per cm3, just think of all of the... erm.. things you could store!
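
For scale, my quick arithmetic: 1 petabit = 10^15 bits / 8 = 1.25 x 10^14 bytes, i.e. roughly 125 terabytes packed into a single cubic centimetre.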
"I would try to find halo source code by bungie best fps engine ever created, u see why call of duty loses speed due to its detail." -- GettingNifty

I think one of the things that will eventually happen is that the GPU will disappear once CPUs are powerful enough to handle all computation; after all, a GPU is really just a partially fixed-pipeline CPU.

I don't think it will disappear, but I do think they're heading for a convergence. GPUs are now essentially at the stage where they can do anything, and the new APIs are going to (help) unlock the lower-level power, but the biggest remaining bottlenecks are bandwidth and shuffling data between different types of memory (and even between different buffers in the same memory). That's the next thing that's going to need to fall.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I think one of the things that will eventually happen is that the GPU will disappear once CPUs are powerful enough to handle all computation; after all, a GPU is really just a partially fixed-pipeline CPU.


I don't think it will disappear, but I do think they're heading for a convergence. GPUs are now essentially at the stage where they can do anything, and the new APIs are going to (help) unlock the lower-level power, but the biggest remaining bottlenecks are bandwidth and shuffling data between different types of memory (and even between different buffers in the same memory). That's the next thing that's going to need to fall.


I'd also like to point out that they are designed to solve completely different problems. A CPU is optimised for a variety of sequential problems, while a GPU is optimised for an enormous number of identical, parallelisable problems.

Saying the "GPU can do everything" may be true, but it doesn't mean it's efficient or correct. The GPU will never be used for I/O operations, nor will it ever be used to handle code execution of today's software, or anything of the likes. The GPU is inefficient if you try to make it execute threads that don't do the same thing, or require approximately the same time to finish.

I also don't see the CPU taking over the GPU's job, because again, the CPU's architecture, including the RAM located around it, just isn't designed for graphics processing.
"I would try to find halo source code by bungie best fps engine ever created, u see why call of duty loses speed due to its detail." -- GettingNifty

