
AnandTech | Intel’s 50Gbps Silicon Photonics Link: The Future of Interfaces

On Tuesday, Intel demonstrated the world’s first practical data connection using silicon photonics – a 50 gigabit per second optical data connection built around an electrically pumped hybrid silicon laser. They achieved the 50 gigabit/s data rate by multiplexing four 12.5 gigabit/s wavelengths into one fiber – wavelength division multiplexing. Intel dubbed its demo the “50G Silicon Photonics Link.”
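The arithmetic of wavelength division multiplexing is simple to sketch: independent data streams, each carried on its own laser wavelength, share one fiber, and their rates add. The specific wavelengths below are illustrative assumptions, not figures from Intel’s demo; only the 4 × 12.5 Gbit/s arrangement comes from the article.

```python
# Illustrative model of wavelength division multiplexing (WDM):
# four channels, each on its own (assumed) wavelength, share one fiber.
channels = [
    {"wavelength_nm": 1310, "rate_gbps": 12.5},
    {"wavelength_nm": 1330, "rate_gbps": 12.5},
    {"wavelength_nm": 1350, "rate_gbps": 12.5},
    {"wavelength_nm": 1370, "rate_gbps": 12.5},
]

# The aggregate link rate is just the sum of the per-wavelength rates.
aggregate = sum(ch["rate_gbps"] for ch in channels)
print(f"{len(channels)} wavelengths x 12.5 Gbit/s = {aggregate} Gbit/s")
```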

Fiber optic data transmission isn’t anything new – it’s the core of what makes the internet as we know it today possible. What makes Intel’s demonstration unique is that they’ve fabricated the laser primarily out of a low-cost, mass-producible, highly understood material – silicon.

For years, chip designers and optical scientists alike have dreamt about the possibilities of merging traditional microelectronics and photonics. Superficially, one would expect it to be easy – after all, both fundamentally deal with electromagnetic waves, just at different frequencies (MHz and GHz for microelectronics, THz for optics).
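The frequency gap mentioned above translates into a huge gap in wavelength, which is why the two regimes need such different physical structures. A quick back-of-the-envelope calculation using the free-space relation λ = c / f (the 3 GHz clock and 193 THz telecom-band carrier are representative values, not figures from the article):

```python
# Free-space wavelength lambda = c / f for representative frequencies:
# a microelectronic clock signal vs. an optical telecom carrier.
C = 299_792_458.0  # speed of light, m/s

for label, freq_hz in [("3 GHz clock", 3e9), ("193 THz optical carrier", 193e12)]:
    wavelength_m = C / freq_hz
    print(f"{label}: wavelength ~ {wavelength_m:.2e} m")
```

The 3 GHz signal has a wavelength of about 10 cm, while the optical carrier sits near 1.55 µm – roughly five orders of magnitude apart.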

On one side, microelectronics deals with integrated circuits and components such as transistors, copper wires, and the thoroughly understood and widely employed CMOS manufacturing process. It’s the backbone of microprocessors, and at the core of conventional computing today. Conversely, photonics employs – true to its name – photons, the basic unit of light. Silicon photonics refers to optical systems that use silicon as the primary optical medium instead of more expensive optical materials. Eventually, photonics has the potential to supplant microelectronics with optical analogues of traditional electrical components – but that’s decades away.

Until recently, successfully integrating the two was a complex balance of manufacturing and leveraging photonics only when it was feasible. Material constraints have made photonics effective primarily as a long-haul means of getting data from point to point. To a large extent, this has made sense because copper traces on motherboards have been fast enough, but we’re getting closer and closer to the limit.

DailyTech – Nanotechnology Delivers Revolutionary Pumpless Water Cooling

Forget traditional metal block coolers: a nanowick could remove 10 times the heat of current chip designs

A collaboration of university researchers and top industry experts has created a pumpless liquid cooling system that uses nanotechnology to push the limits of past designs.

One fundamental computing problem is that there are only two ways to increase computing power: increase the speed or add more processing circuits.  Adding more circuits requires advanced chip designs like 3D chips or, more traditionally, die shrinks that are approaching the limits of the laws of physics as applied to current manufacturing approaches.  Meanwhile, speedups are constrained by the fact that increasing chip frequency increases power consumption and heat, as evidenced by the gigahertz war that peaked in the Pentium 4 era.

A team led by Suresh V. Garimella, the R. Eugene and Susie E. Goodson Distinguished Professor of Mechanical Engineering at Purdue University, may have a solution to cooling higher frequency chips and power electronics.  His team cooked up a bleeding edge cooler consisting of tiny copper spheres and carbon nanotubes, which wick coolant passively towards hot electronics.

The coolant used is everyday water, which is transferred to an ultrathin “thermal ground plane” — a flat hollow plate.

The new design can handle an estimated 10 times the heat of current computer chip designs.  That opens the door not only to higher frequency CPUs and GPUs, but also to more efficient electronics in military and electric vehicle applications.

The new design can wick an incredible 550 watts per square centimeter.  Mark North, an engineer with Thermacore comments, “We know the wicking part of the system is working well, so we now need to make sure the rest of the system works.”
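To put 550 W/cm² in perspective, a rough comparison against a conventional processor helps. The chip power and die area below are assumptions chosen to represent a typical desktop CPU of the era; only the 550 W/cm² figure comes from the article.

```python
# Back-of-the-envelope heat flux comparison. The chip's power draw
# and die area are illustrative assumptions, not article figures.
chip_power_w = 130.0   # assumed desktop CPU power dissipation, W
die_area_cm2 = 2.0     # assumed die area, cm^2
chip_flux = chip_power_w / die_area_cm2  # W/cm^2

nanowick_flux = 550.0  # W/cm^2, from the article

print(f"chip: {chip_flux:.0f} W/cm^2, nanowick: {nanowick_flux:.0f} W/cm^2")
print(f"headroom: ~{nanowick_flux / chip_flux:.1f}x")
```

Even under these rough assumptions, the wick’s capacity is nearly an order of magnitude above what a conventional chip actually dissipates, consistent with the article’s “10 times” estimate.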

The design was first verified with computer models made by Garimella, Jayathi Y. Murthy, a Purdue professor of mechanical engineering, and doctoral student Ram Ranjan.  Purdue mechanical engineering professor Timothy Fisher’s team then produced physical nanotubes to implement the cooler and test it in an advanced simulated electronic chamber.

Garimella describes this fused approach of using computer modeling and experimentation hand in hand, stating, “We have validated the models against experiments, and we are conducting further experiments to more fully explore the results of simulations.”

Essentially, the breakthrough offers pumpless water cooling, as the design naturally propels the water.  It also draws on microfluidics and advanced microchannel research to allow the fluid to fully boil, wicking away far more heat than similar past designs.

This is enabled by a smaller pore size than in previous sintered designs.  Sintering is fusing together tiny copper spheres to form a cooling surface.  Garimella comments, “For high drawing power, you need small pores.  The problem is that if you make the pores very fine and densely spaced, the liquid faces a lot of frictional resistance and doesn’t want to flow. So the permeability of the wick is also important.”

To further improve the design and make the pores even smaller the team used 50-nm copper coated carbon nanotubes.
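The tradeoff Garimella describes can be sketched quantitatively: capillary driving pressure rises as pore radius shrinks (Young–Laplace, ΔP = 2σ·cosθ / r), while the wick’s permeability, and hence how easily liquid flows through it, falls roughly as r². The surface tension and contact angle below are approximate textbook values for water on a wetting surface, not figures from the article.

```python
# Capillary pressure vs. pore size for a wicking structure.
# Constants are illustrative assumptions for water at room temperature.
import math

SIGMA = 0.072             # surface tension of water, N/m (approx.)
THETA = math.radians(30)  # assumed contact angle

def capillary_pressure(pore_radius_m):
    """Young-Laplace driving pressure for a cylindrical pore, in Pa."""
    return 2 * SIGMA * math.cos(THETA) / pore_radius_m

# Smaller pores pull harder but (roughly) scale permeability as r^2.
for r in (10e-6, 1e-6, 100e-9):
    print(f"r = {r:.0e} m -> dP ~ {capillary_pressure(r)/1000:.0f} kPa, "
          f"relative permeability ~ {(r / 10e-6)**2:.4f}")
```

Going from 10 µm pores to 100 nm pores boosts the driving pressure a hundredfold while cutting the relative permeability by a factor of ten thousand, which is exactly why pore size alone cannot be pushed down without limit.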

The research was published in this month’s edition of the peer-reviewed journal International Journal of Heat and Mass Transfer.

Raytheon Co. is helping design the new cooler.  Besides Purdue, Thermacore Inc. and Georgia Tech Research Institute are also aiding the research, which is funded by a Defense Advanced Research Projects Agency (DARPA) grant.  The team says they expect commercial coolers utilizing the tech to hit the market within a few years.  Given that commercial cooling companies (Thermacore, Raytheon) were involved, there’s credibility in that estimate.

DailyTech – “Proof” That Linux Project Ripped Off Unix Code Released

Leak from biased source obviously will draw skepticism from the open-source community

IBM, which is among the largest firms pushing the open-source Linux operating system, was slammed with a $1B USD lawsuit in 2003 from SCO, one of the owners of a Unix distribution. The lawsuit alleged that IBM ripped off Linux code from the Unix codebase and was “devaluing” it.

The damages eventually swelled to $5B USD, but SCO was defeated when Novell was shown to hold most of the applicable Unix intellectual property and Novell waived the case.  In the end, SCO filed for bankruptcy, and the Novell loss resulted in a ruling that SCO owes Novell $2.35M USD for copyright infringements (a total later bumped to $3.4M USD).

Even as SCO is appealing [PDF] that decision, Kevin McBride, a lawyer and brother of former SCO CEO Darl McBride, has released [see comments section] a wealth of documents showing some of the code that SCO claimed IBM’s Linux ripped off.

He writes:

While UNIX ownership rights are still not finally settled (pending SCO’s appeal of Novell’s jury victory in March, 2010) it is certainly my view, after careful review of all these issues, that Linux DOES violate UNIX copyrights, particularly in ELF code and related tools (debugger code, etc.), header file code wherein implementation code (not just the header interface) have been copied verbatim; STREAMS code; etc. that the Linux community use without license. Then there is the entire question of the overall structure and sequence of Linux being almost an exact copy of UNIX.

There should be little question by anyone at this point that Linux uses a LOT of UNIX code. The Linux world thinks that use is permissive. SCO disagreed. That is the only real issue to be discussed here.

Will Novell win the current SCO appeal? Probably. Will Novell donate the UNIX copyrights to the Linux community if it wins the current appeal? Probably–although Novell’s Linux activities have been difficult to predict in recent years. But does Linux violate UNIX copyrights? Yes.

So, in my opinion, Linux users owe Novell–and particularly its excellent Morrison & Forrester legal team–a huge debt for coming to the rescue and keeping Linux a royalty-free product.

And follows up:

SCO submitted a very material amount of literal copying from UNIX to Linux in the SCO v. IBM case. For example, see the following excerpts from SCO’s evidence submission in Dec. 2005 in the SCO v. IBM case:

Tab 422, Tab 421, Tab 420, Tab 419, Tab 418, Tab 417, Tab 416, Tab 415, Tab 414, Tab 413, Tab 412, Tab 411, Tab 410, Tab 409, Tab 333, Tab 332, Tab 331, Tab 330, Tab 329, Tab 255, Tab 254, Tab 253, Tab 252, Tab 251, Tab 250, Tab 249, Tab 248, Tab 247, Tab 246, Tab 245, Tab 244, Tab 243, Tab 242, Tab 241, Tab 240, Tab 239, Tab 238, Tab 237, Tab 236, Tab 235, Tab 234, Tab 233, Tab 232, Tab 231, Tab 230, Tab 229.

There was MUCH more submitted in the SCO v. IBM case that I cannot disclose publicly because it is comparison of code produced by IBM under court protective order that prohibits disclosure.

But the court in SCO v. IBM will probably never decide whether use of this (and all the other UNIX code) in Linux was, or was not permissive, because in the SCO v. Novell case, the jury decided in March 2010 that Novell owns the UNIX copyrights, not SCO.

As I mentioned in the reply to Andreas, if you Linux guys want to give credit where credit is due, you should all thank Novell for having the courage to take the case all the way to trial (I thought SCO had a much stronger case on the ownership question) and its legal counsel, Morrison & Forrester, for doing an outstanding job for Novell at trial–Michael Jacobs, Eric Acker and Sterling Brennan.

In case those links no longer work, you can also get a collected archive of the PDFs here.

Looking briefly at the code involved, some of it indeed appears to be copied and pasted, or at least designed using common design documents.  The fact that so many named variables match up would certainly indicate that.  However, the order of the code has been rearranged and there have been numerous deletions and insertions in these sections.

Further, some of the segments of code included are pretty generic.  In these cases it is harder to tell whether the code was indeed copied as claimed, or just implemented similarly.
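The kind of inspection described above can be crudely mechanized: extract the identifiers from two code fragments and measure their overlap. The snippets below are hypothetical stand-ins, not the actual UNIX or Linux code from the filings; heavy overlap of distinctive names hints at copying, while overlap of generic names proves little.

```python
# Crude identifier-overlap check between two code fragments.
# The fragments here are invented examples, not the disputed code.
import re

def identifiers(src):
    """Extract the set of C-style identifiers from a source string."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", src))

a = "int strioc_pollwake(struct queue *q, mblk_t *mp) { return q->q_flag; }"
b = "int strioc_pollwake(struct queue *q, mblk_t *bp) { return q->q_flag; }"

shared = identifiers(a) & identifiers(b)
jaccard = len(shared) / len(identifiers(a) | identifiers(b))
print(f"shared identifiers: {sorted(shared)}")
print(f"Jaccard similarity: {jaccard:.2f}")
```

A distinctive shared name like a STREAMS-style function identifier is far stronger evidence than shared keywords such as `int` or `return`, which is why, as noted above, generic segments are hard to call either way.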

Ultimately, whether the code was copied or not may prove a moot point, as the jury trial resoundingly declared Novell to own the Unix code.  And Novell is not interested in suing IBM at present.  Unless SCO’s appeal, filed in the U.S. Court of Appeals for the Tenth Circuit on July 7, 2010, succeeds, this leak may prove merely an interesting footnote in a case of extreme importance to the open-source movement.
