Dear forum members,
We are delighted to inform you that our forums are back online. Every topic and post is now back in its place, and everything works just as before, only better. Welcome back!


Discussion on Article:
Intel to Start Selling NUC Barebone Kits in December.

Started by: turtle | Date 11/09/12 04:28:50 PM
Comments: 25 | Last Comment:  11/13/12 10:21:21 PM



As Anand mentioned, what's important right now is that it's out there. Right now Intel needs to make clear not only that this is something they will support, but also to set a baseline for what the expected feature set should be.

Currently, it doesn't look half bad. It will be interesting to see what the platform will be capable of, not only as the chipset, many I/O functions, and other features currently off-die become part of the CPU, but also as low-power chips evolve. Sure, you may not game on an HD 4000, and maybe even Haswell will be relegated to simpler video tasks and serve a very specific user base, but as we move forward, we're not far away (probably 5-6 years from the power of the upcoming consoles) from some form of a system like this being enough for a full-fledged experience... gaming and all. It's kind of ridiculous when you think about how far we've come.

If the form factor becomes an accepted standard, odds are prices will fall substantially as all the pieces, most of which are currently more-or-less specialized and expensive for what they are, become more of a commodity. When that day comes, it won't be a silly little trinket you compare to a laptop without a screen; it will be a customized and upgradable media center/port and device hub the size of a phone you can take with you wherever you go and use with any random display device along the way... and that is just freaking rad.
3 2 [Posted by: turtle  | Date: 11/09/12 04:28:50 PM]

" turtle: Right now Intel needs to focus on the fact not only is it something they will support, but also set a baseline of what the expected feature set should be. "

Agreed. What is it with Intel anyway? We have known for years that their Thunderbolt port IP supports four possible ports internally, and that it is in effect talking to the next device in the chain over a virtual PCIe bus, and yet years later we still only get a paltry ONE Thunderbolt port on PCs. You need two at the very least so you can daisy-chain devices together.

It's like they don't actually want you to buy it and make Thunderbolt a real-world standard.

And why have they still not written generic Ethernet-over-Thunderbolt drivers, given, for instance, that their other interesting card, the Knights Corner co-processor, uses vanilla TCP/IP over PCIe internally to and from all its cores?

So put these KCs in your Thunderbolt-equipped desktop and talk to the other daisy-chained Thunderbolt devices with Ethernet-over-Thunderbolt drivers. It's so obvious; why doesn't Intel write the damn thing and mandate two daisy-chainable Thunderbolt ports per Intel device peripheral from now on? You can't make a standard by price gouging at the start (that comes later, like Apple); Intel should be doing a special "buy one Thunderbolt part, get one free" type of offer to the OEMs.
"KC has an OS running on it, in this case it is a single Linux image per chip. Larrabee and Knights Ferry, the architecture that preceded KC, used a BSD based OS. The OS runs can be SSH’d to, and can run code just like a standalone CPU, but most users won’t use it in that way. That OS is presented with an interface that looks like a standard TCP/IP software stack, but there is no reason why it could not be hardware based in the future. If you have multiple KC cards in a server, they tunnel vanilla TCP/IP over PCIe.

In case it isn’t obvious, the standard way to do clustering on a rack of PC servers is with MPI, and that uses TCP/IP to talk between nodes."
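The clustering model the quote describes, where nodes exchange messages over plain TCP/IP regardless of what physical transport carries the bytes, can be sketched with nothing but standard sockets. This is a toy loopback illustration, not Intel's actual MIC stack; both "nodes" live in one process here:

```python
import socket
import threading

# One "node" listens. Binding happens up front (port 0 = pick any free
# port), so the connecting side cannot race ahead of the listener.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

received = []

def accept_one():
    """Receive a single message from the peer 'node'."""
    conn, _ = srv.accept()
    received.append(conn.recv(1024).decode())
    conn.close()

t = threading.Thread(target=accept_one)
t.start()

# The other "node" pushes a message over vanilla TCP/IP. Whether the
# bytes ride an Ethernet NIC, PCIe, or a Thunderbolt hop is invisible
# at this layer -- which is exactly why tunnelling TCP/IP is attractive.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(b"rank 1 -> rank 0: partial result")

t.join()
srv.close()
print(received[0])  # prints: rank 1 -> rank 0: partial result
```

Anything built on sockets, MPI-over-TCP included, works unchanged once the kernel presents the link as a network interface.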

Ohh, I see: down-voted because it's too complicated to comprehend, perhaps. These NUCs are billed as special-purpose, at a special price, so two Thunderbolt ports + generic Ethernet-over-Thunderbolt drivers = daisy-chained generic (up to) 10-gigabit TCP/IP Ethernet speed between all Thunderbolt-connected (NUC and desktop) PCs.

If TCP/IP Ethernet over USB with the existing generic Linux usb-eth interface was good enough for the old mobile iPAQ back in the day,

then why would you not want, and demand, a free Windows/Linux driver that gives you up to 10-gigabit TCP/IP Ethernet speeds between all Thunderbolt-connected PCs today, given the current price gouging on all Thunderbolt kit?
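For what it's worth, mainline Linux did eventually gain exactly this kind of driver (thunderbolt-net, merged years after this discussion). A sketch of what bringing up such a link looks like, assuming a kernel that ships the module and a second machine on the far end of the cable; module and interface names may differ by kernel and distribution:

```shell
# Sketch: IP networking over a Thunderbolt cable on Linux.
# Assumes a kernel with the thunderbolt-net module and a peer machine
# connected at the other end of the Thunderbolt cable.
sudo modprobe thunderbolt-net          # load the Thunderbolt networking driver
ip link show                           # a thunderbolt0 interface should appear
sudo ip addr add 10.99.0.1/24 dev thunderbolt0
sudo ip link set thunderbolt0 up
# On the peer machine, use 10.99.0.2/24 instead, then:
ping 10.99.0.2                         # plain TCP/IP over the Thunderbolt link
iperf3 -c 10.99.0.2                    # measure the multi-gigabit throughput
```

From this point every ordinary networked application (SSH, NFS, MPI) sees just another Ethernet-style interface.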
2 3 [Posted by: sanity  | Date: 11/09/12 11:14:31 PM]

Why don't you read this piece from an Intel-leaning website called SemiAccurate? Thunderbolt is... a no-go.

And then, for good measure, you might want to digest what AMD is proposing to counter Intel's Thunderbolt technology. It's called Lightning Bolt.
1 1 [Posted by: linuxlowdown  | Date: 11/13/12 06:06:16 AM]
I'm not sure why someone voted you down; after all, you did provide links, so I up-voted you again for being helpful.

Of course, you seem to miss the point, but that's OK. The point being that SOHO/home users have had to put up with crappy, single, non-advancing 1GbE for many, many YEARS,
as the old Ethernet vendors just sit back and suck at the teat of high-margin 10GbE/40GbE/100GbE without giving the SOHO masses far better local LAN/WAN/SAN TCP/IP data throughput. That needs to change ASAP.

Of course, I also preferred the older 50Gb+/s photonics version to the slow 15-foot copper 10Gb/s version; Intel has been researching silicon photonics for some time.

I don't really care who brings it (cheap nano-photonics / 10GbE+), only that they do, and that whoever has the cash pushes it, and its related routers etc., at a reasonable mass-consumer price ASAP: IBM, Hewlett-Packard and others, including the ARM vendors.

For instance, IBM has been exploring its use for connecting transistors on chips, rather than just between larger devices.

0 0 [Posted by: sanity  | Date: 11/13/12 10:21:21 PM]

For the love of GOD, people, WTF is wrong with you?

When AMD does this you say "ahh, lame CPU with mediocre GPU", although we all know that the AMD GPU beats the **** out of the Intel HD 4000, and a dual core clocked at 1.8GHz is an Atom disguised as an i3 to suit your pleasure.

You can do the same with AMD; it will be cheaper, better, and can be silent:

Also, an AMD E-450 motherboard + APU costs 110 USD, and I have personally handled that APU; let me tell you, it can game and perform great. This is just stupid Intel marketing for stupid people.
3 4 [Posted by: medo  | Date: 11/10/12 02:32:57 AM]

" medo: When AMD does this you say "ahh, lame CPU with mediocre GPU", although we all know that the AMD GPU beats the **** out of the Intel HD 4000, and a dual core clocked at 1.8GHz is an Atom disguised as an i3 to suit your pleasure. "

Just because it is clocked at 1.8GHz doesn't mean that it is "an Atom disguised as an i3".

I think most people here know that Ivy Bridge, clock for clock, demolishes Atom.

" medo: Also, an AMD E-450 motherboard + APU costs 110 USD, and I have personally handled that APU; let me tell you, it can game and perform great. This is just stupid Intel marketing for stupid people. "

With respect, the only stupid one here is you.

The Core i3-3217U is better than the E-450 in every way possible (including GPU performance).

I'm not even sure if you are serious.
4 3 [Posted by: maroon1  | Date: 11/10/12 09:19:31 AM]
show the post
1 5 [Posted by: TA152H  | Date: 11/10/12 12:16:07 PM]
" TA152H: The E-450 is deprecated. It's been usurped by the E2-1800, which is much cheaper to make, and does have the more powerful GPU. "

The E-450 and E2-1800 are based on the exact same architecture. The only difference is that the E2-1800 runs at slightly higher clock speeds.

The E2-1800 uses a rebranded GPU with a slightly higher frequency than the E-450's.

" TA152H: Have you forgotten the i3-3217U runs the GPU at 350 MHz? Intel had to do that to keep the power requirements in line, but doing so made it inferior even to Bobcat in GPU performance. "

Keep in mind that the GPU turbo clock of the 17-watt Ivy Bridge parts is close to the regular models'. For example, the i3-3217U can turbo up to 1050MHz.

Here you can see a GPU benchmark for an Intel ultrabook that uses a 17-watt Ivy Bridge with an HD 4000 clocked at 350MHz (with GPU turbo).

It is not far behind the 35W A8-3500M in many cases.

Bobcat (which has less than half the performance of the A8-3500M) can never match 17W Ivy Bridge on GPU. Period.

" TA152H: Overall though, in most applications it's going to be better, but it's also a LOT more expensive. "

You must be kidding. The i3-3217U should destroy Bobcat in almost everything, and in some cases you should expect more than twice the performance (mostly in CPU-intensive workloads).

Bobcat is meant to compete with Atom, not with Ivy Bridge.
5 2 [Posted by: maroon1  | Date: 11/10/12 01:45:27 PM]
show the post
1 7 [Posted by: TA152H  | Date: 11/10/12 09:28:48 PM]
" TA152H: Yet the E2-1800 is better at everything than the E-450, which is obsolete. GPU performance is considerably faster and it turbos higher. It's interesting that you chose an obsolete part, even though the newer part is available. "

E2-1800: CPU 1.7GHz, GPU 523MHz (680MHz turbo)
E-450: CPU 1.65GHz, GPU 508MHz (600MHz turbo)

There is barely any difference between the two.
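"Barely any difference" is easy to put a number on from the clocks listed above:

```python
# Relative clock differences between the E2-1800 and the E-450,
# using the figures quoted above.
def pct_gain(new, old):
    """Percentage increase of `new` over `old`."""
    return (new - old) / old * 100

cpu_gain = pct_gain(1.70, 1.65)    # base CPU clock, GHz
gpu_gain = pct_gain(523, 508)      # base GPU clock, MHz
turbo_gain = pct_gain(680, 600)    # GPU turbo clock, MHz

print(f"CPU: +{cpu_gain:.1f}%  GPU base: +{gpu_gain:.1f}%  GPU turbo: +{turbo_gain:.1f}%")
# -> CPU: +3.0%  GPU base: +3.0%  GPU turbo: +13.3%
```

So apart from a roughly 13% GPU turbo bump, the two parts really are near-identical.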

" TA152H: Ivy Bridge can't turbo up to 1050 MHz at 17 watts while doing much of anything. The GPU gets destroyed in the graphs you chose, and that's just how it is. If it's using the CPU, it can't do much with the GPU. It's not meant to run at 17 watts and is seriously compromised. Bobcat has a design that was meant to run at 18 watts, and can actually use that to its benefit. "

The graph I chose was a comparison between a 17W IB and the 35W A8-3500M (which uses a 400-shader GPU).

The E-450 and E2-1800 have less than half the GPU power of the A8.

Unfortunately, I can't find any review that compares 17-watt Ivy Bridge to Bobcat.

However, there is a review that compares a Pentium (Sandy Bridge) to the E-450.

Believe it or not, the GPU in the SB-based Pentium gets a higher score than the E-450.

Those Pentiums use a 6-EU GPU (equivalent to HD 2000). In other words, even a low-clocked HD 4000 should smoke these Pentiums.

4 1 [Posted by: maroon1  | Date: 11/11/12 06:14:36 AM]
VIA can also compete with the Intel NUC and AMD's Bobcat (40nm lithography) / Jaguar (28nm lithography).
E.g. the ZOTAC ZBOX nano VD01 with a VIA Nano X2 processor: length 5in (127mm), width 5in (127mm), depth 1.77in (45mm).

And if ZOTAC "tomorrow" upgraded the ZBOX nano VD01 to the VIA QuadCore (40nm lithography) plus the new all-in-one VIA VX11H MSP chipset (40nm lithography) with the DirectX 11 VIA Chrome 640/645 GPU, its performance would land in the middle between Brazos and the NUC.

And in 2013 VIA will update the VIA Isaiah (CN) x86-64 microarchitecture to a monolithic refresh, the VIA QuadCore CN-R (28nm lithography), on a single chip with a new 2MB L3 cache and SIMD up to AVX2!
3 0 [Posted by: Tralalak  | Date: 11/11/12 08:46:10 AM]
While it's cool that VIA is still fighting in the Windows market, it's a shame that even with this new VIA QuadCore their OSS Linux GFX performance is very lacking.

Still, they may be getting a good infusion of cash soon, as they are suing Apple over their long-standing patents; one thing VIA does have is good patents.

So perhaps, if they have any sense, VIA will use some of that new cash to open up these new quad cores, improve their GFX, and provide good OSS drivers.

Then the new low-power x86 OSS devs could help them once again become usable in this new cooperative mobile Linux market, which seems to be rising up to take a lot of the low/mid x86 market share away today. We shall see.
1 0 [Posted by: sanity  | Date: 11/12/12 04:16:36 PM]
"medo: you can do the same with AMD"

Really? So where is the AMD equivalent of the 10-gigabit Thunderbolt interconnect? Where is the AMD OSS hardware-assisted video encode? Where are the AMD OSS OpenCL compute libraries for Linux?

Games? If you want games, then buy a console, or a mobile ARM quad-core/quad-GFX part with ARM's new Midgard-architecture Mali-T6xx inside.
"ARM's licensed CPU cores dominate the mobile space."

AMD Linux gaming is crap, and Intel is not much better there at the moment, but at least Intel has many developers contributing to OSS Linux GFX and other areas, both desktop and mobile, such as "ETC2, the new royalty-free alternative to S3TC texture compression, is part of what's needed for OpenGL ES 3.0 compliance, with Intel wanting GLES 3.0 for the next Mesa release in early 2013" http://lists.freedesktop....2012-November/029853.html

AMD just sacked their most competent long-time Linux developers, the ones working on future CPU product enablement, compiler optimizations, enhanced Linux virtualization support, and other areas, just as their balance-the-inventory-books management takes up the ARM Cortex SoC that runs all the Linux-derived OSes on every ARM CPU, never mind the new Cortex dual/quad derivatives. Brilliant. So who are the stupid people investing in AMD products now, medo?

So I'll put my money on Intel for real-time x264 highest-quality HD encoding and hardware-assisted video encode/decode, and on an ARM quad for probably everything else at home in 2013; it's the only valid option today if time matters to you.
2 1 [Posted by: sanity  | Date: 11/10/12 04:40:46 PM]
If you had any idea about quality H.264 encoding you would not use Intel, and as for ARM, we will have to wait and see. Sadly, you are right about the Linux developer team: a very bad and strange move by AMD's suicidal management.
1 1 [Posted by: mosu  | Date: 11/11/12 10:34:06 AM]
(^ and no, it was not me who down-voted you; I prefer to give you more facts so you can decide for yourself.)

i said "X" 264 FOR quality as in the OSS software multi threaded assembly encoder that always beats AMD encode times given the same input line when run on i5/i7

(Put simply: that means Intel's single generic i5/i7 integer SIMD is faster than the two dedicated integer units inside the most modern AMD CPUs when used properly in multi-threaded apps.)

not "H" as in the lower quality hardware assisted block inside the Intel CPU as standard, that Intel HW assisted block does have its uses for quick better than real time encoding for non quality encodes OC as AMD don't provide any HW assisted Encode (even crap visual quality).

And NO, the Windows-only "ATI Avivo™" (with super-crap, almost unusable quality) is not a real, full HW-assisted encode; it's software running on the CPU and GPU, as was proven when it first became available and was locked to AMD GPUs: someone released a patched version that ran better on Intel + NV GPUs with no UVD inside. AMD PR really did a number on that UVD to sell their kit.

The fact that Intel has released their HW encode/decode documentation also means any OSS dev on any x86 OS can use it to write new software for HW encode/decode on Intel, or even patch parts of x264 in the future if they care to.

You can't do the same even for video decode on AMD, as they have never released the docs for the crap "UVD: a dedicated video decode processing unit introduced with the ATI Radeon™ HD 2000 series", even though that's been around a lot longer.
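The software-vs-hardware split described above maps onto two ffmpeg invocations. A sketch only: it assumes an ffmpeg build with both libx264 and Intel Quick Sync (h264_qsv) support enabled, and input.mkv is a placeholder source file:

```shell
# Software x264: slower, best quality per bit, runs on the CPU's SIMD units.
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 18 out_x264.mkv

# Quick Sync hardware block: much faster than real time, lower quality at
# the same bitrate -- fine for quick, non-archival encodes.
ffmpeg -i input.mkv -c:v h264_qsv -global_quality 23 out_qsv.mkv
```

Run both on the same clip and compare file sizes and visual quality; that is exactly the "x vs. H" trade-off the post is pointing at.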
1 1 [Posted by: sanity  | Date: 11/12/12 01:31:57 PM]

No USB 3.0 support on this? For that much money you'd expect USB 3.0 support.
3 2 [Posted by: SteelCity1981  | Date: 11/10/12 02:12:30 PM]

If you want, and are happy with, up to 999 USB ports software-polling your CPU every second, all the time, then it's clear AMD CPUs and motherboards are for you, as the AMD motherboard OEMs can't seem to pile enough USB ports on there to differentiate themselves. After all, it doesn't really matter much any more, taking even more vital CPU cycles away from the limited AMD CPUs just to poll USB.

But it would be fine for Intel to put a $1 four-port USB 3.0 chip on there, especially at this price, as you say, to complement the hardware-DMA-driven Thunderbolt and give their users more options.
1 3 [Posted by: sanity  | Date: 11/10/12 07:34:55 PM]

I don't understand the use case for this. Selling barebones systems is for people who tinker and can build their own, and people like that will know enough to want to build something more capable.

I can see selling something like this all wrapped up with software in a scenario where you take it home, open the box and plug it in.
1 1 [Posted by: KeyBoardG  | Date: 11/10/12 07:39:40 PM]

Agreed, but these versions are clearly too limited in their current form for use in 2013.

They can only sit at the end of a Thunderbolt chain like this, for instance; and does anyone seriously buy a dual-core x86 for 2012/13 general use today? I can't even remember the last dual core I bought, it's been that long.

If it had a quad-core i5, two 1GbE ports, and two Thunderbolt ports (and lots of SATA; I could have a nice cheap case 3D-printed for it, for instance :), then I'd say that barebones version would fly off the shelves this holiday season, as you could make lots of interesting kit in a very short time.

Perhaps then the Linux OSS devs might even have a reason to refactor the existing Linux usb-eth code to use Thunderbolt as a virtual 10-gigabit TCP/IP Ethernet daisy-chained connection,

as it seems the commercial OEMs are not interested in putting in the little effort needed to write and sell generic Ethernet-over-Thunderbolt drivers with a generic Thunderbolt cable.
2 3 [Posted by: sanity  | Date: 11/10/12 08:39:10 PM]

Even though I would like to have computers this small, I am completely unable to imagine a case where these Intel NUCs would be useful. You either want to connect these computers to a display, or you don't. In the first case you need either dual-link DVI or DisplayPort to drive a decent monitor. HDMI is limited to 1080 vertical lines, so it is usable only by brainwashed consumers who never read anything lengthy on their computers and so believe that the television screens which, over the last 5-6 years, have replaced real computer monitors are good enough as computer displays. For someone who only watches movies or plays games, HDMI would be good enough, but then audio output would be needed. In the second case, when you do not use a display, you would normally need at least two Gigabit Ethernet ports. So these Intel barebones are not usable in either case.
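The single-link vs. dual-link distinction above comes down to pixel clock. A back-of-the-envelope sketch (active pixels only, ignoring blanking intervals, measured against the 165 MHz single-link DVI TMDS ceiling):

```python
# Rough pixel-clock check: which 60 Hz desktop modes fit a single TMDS link?
# Active pixels only -- real display timings add blanking overhead on top,
# so the true figures are somewhat higher than these estimates.
SINGLE_LINK_MHZ = 165  # single-link DVI TMDS clock ceiling

def pixel_clock_mhz(width, height, refresh_hz):
    """Approximate pixel clock in MHz, counting active pixels only."""
    return width * height * refresh_hz / 1e6

for w, h in [(1920, 1080), (1920, 1200), (2560, 1440), (2560, 1600)]:
    clk = pixel_clock_mhz(w, h, 60)
    verdict = "single link" if clk <= SINGLE_LINK_MHZ else "dual-link DVI / DisplayPort"
    print(f"{w}x{h}@60Hz: ~{clk:.0f} MHz -> {verdict}")
```

The 2560-wide modes blow well past the single-link budget even before blanking, which is why a "decent monitor" of that class needs dual-link DVI or DisplayPort.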
0 2 [Posted by: Adrian  | Date: 11/11/12 11:37:58 AM]

Intel just needs to add a few mini-PCIe slots, so users who need more network connections or audio can add them as mini-PCIe cards.
0 1 [Posted by: idonotknow  | Date: 11/12/12 04:44:46 AM]

A new form factor is in the works. Intel designed one; should it be called NIC, for Next Intel Computing? AMD might as well get one going soon, or that NIC/NUC will not work with AMD APUs. AMD needs one called NAC, Next AMD Computing, with its APU; just make sure it has double the cores of the NIC/NUC, since consumers will go for more cores, just like with the current desktop versions. Knowing AMD, 8 cores is slow as a turtle, but to get my VirtualBox server up I will have to get that 8-core 8350 in the next few months, after I save enough money for it.
0 2 [Posted by: idonotknow  | Date: 11/12/12 04:37:18 AM]

