Intel Corp.’s demonstration of its next unit of computing (NUC) ultra-small form-factor concept made a splash earlier this year, but it also caused considerable disappointment, as Intel positioned the NUC only for embedded, digital signage and similar applications. However, Intel has quietly introduced two NUC barebones for consumers and will start selling them at retail next month. Unfortunately, these ultra-small yet powerful PCs will be quite expensive.

Initially, Intel will offer two NUC barebone kits, the DC3217IYE and the DC3217BY, both based on the dual-core Core i3-3217U microprocessor (1.8GHz, 3MB cache, 17W TDP) with the Intel HD Graphics 4000 core and the QS77 core-logic. Both barebones can be equipped with two DDR3 SO-DIMMs, an mSATA solid-state drive and a mini PCIe Wi-Fi/Bluetooth module. The DC3217IYE features two HDMI outputs and 1Gb Ethernet, whereas the DC3217BY has one HDMI output and one Thunderbolt port, but lacks 1Gb Ethernet. Interestingly, neither model has any analogue audio output ports (the company proposes to use the HDMI or Thunderbolt ports for audio output), which suggests that either Intel wants to get rid of analogue audio completely, or that the NUC is intended for special-purpose use.

Both systems are just 4.59”×4.41”×1.55” (116.6mm×112mm×39mm) in size, yet feature a decent microprocessor and an advanced graphics core with multi-monitor capability. All in all, technology-wise, Intel’s NUCs are nothing but impressive.

 

Unfortunately, Intel’s next unit of computing will be pretty expensive. Some sources familiar with the company’s plans indicate that the Intel DC3217BY barebone kit (CPU + mainboard + chassis + 65W PSU) will cost from $300 to $330 in the U.S. (the AnandTech web-site reports a $300 - $320 price range), while others point to price-points north of $350. The cost of 8GB of DDR3 memory, a 120GB mSATA SSD and a Wi-Fi mini PCIe card adds another $160 or more. As a result, a fully-configured Intel NUC will cost $460 - $500, the price of a laptop that offers comparable performance and a comparable feature-set, but which is also equipped with a display, keyboard, optical disk drive and so on.
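The totals are easy to verify. Here is a minimal sketch of the arithmetic in C, using the price ranges quoted above (these are reported street prices, not official Intel list prices):

#include <stdio.h>

int main(void)
{
    /* Reported barebone kit prices (USD), per the ranges quoted above. */
    int kit_low = 300, kit_high = 330;   /* CPU + mainboard + chassis + 65W PSU */
    int parts   = 160;                   /* 8GB DDR3 + 120GB mSATA SSD + Wi-Fi  */

    /* Kits priced north of $350 would push the total past $500. */
    printf("fully configured: $%d - $%d\n", kit_low + parts, kit_high + parts);
    return 0;
}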

While Intel’s NUC ultra-small form-factor systems look neat and should provide impressive performance and features, they are currently just too expensive to become mass-market solutions. In fact, just as Intel said, NUCs will initially be viable for specialized applications where cost does not matter much. For typical end-users, NUCs hardly provide the right price/performance ratio.

Tags: Intel, NUC, Ivy Bridge, Core

Discussion


1. 
As Anand mentioned, what's important right now is that it is out there. Intel needs to establish not only that this is something they will support, but also a baseline for what the expected feature set should be.

Currently, it doesn't look half bad. It will be interesting to see what the platform becomes capable of, not only as the chipset, many I/O functions and other features currently off-die become part of the CPU, but also as low-power chips evolve. Sure, you may not game on an HD 4000 or maybe even Haswell GT3; it may be relegated to simpler video tasks and serve a very specific user base. But we're not far away (probably 5-6 years from the power of the upcoming consoles) from some form of a system like this being enough for a full-fledged experience, gaming and all. It's kind of ridiculous when you think about how far we've come.

If the form-factor becomes an accepted standard, odds are prices will fall substantially as all the pieces, most of which are currently more-or-less specialized and expensive for what they are, become commodities. When that day comes, it won't be a silly little trinket you compare to a laptop without a screen; it will be a customized, upgradable media center and device hub the size of a phone that you can take with you wherever you go and use with any random display device along the way. And that is just freaking rad.
3 2 [Posted by: turtle  | Date: 11/09/12 04:28:50 PM]

2. 
" turtle: Right now Intel needs to focus on the fact not only is it something they will support, but also set a baseline of what the expected feature set should be. "

Agreed. What is it with Intel, anyway? We have known for years that their Thunderbolt port IP supports four possible ports internally, and that it in effect talks to the next device in the chain over a virtual PCIe bus, and yet years later we still get a paltry one Thunderbolt port on PCs. You need two at the very least so you can daisy-chain devices together.

It's like they don't actually want you to buy it and make Thunderbolt a real-world standard.

And why have they still not written generic Ethernet-over-Thunderbolt drivers, given, for instance, that their other interesting card, the Knights Corner co-processor, uses vanilla TCP/IP over PCIe internally to/from all its cores?

So put these KCs in your Thunderbolt-equipped desktop and talk to the other daisy-chained Thunderbolt devices with Ethernet-over-Thunderbolt drivers. It's so obvious; why doesn't Intel write the damn thing and mandate two daisy-chainable Thunderbolt ports per Intel peripheral from now on? You can't establish a standard by price gouging at the start (that comes later, like Apple). Intel should be doing a special "buy one Thunderbolt part, get one free" type of offer to the OEMs.

http://semiaccurate.com/2...rchitecture-at-long-last/
"KC has an OS running on it, in this case it is a single Linux image per chip. Larrabee and Knights Ferry, the architecture that preceded KC, used a BSD based OS. The OS runs can be SSH’d to, and can run code just like a standalone CPU, but most users won’t use it in that way. That OS is presented with an interface that looks like a standard TCP/IP software stack, but there is no reason why it could not be hardware based in the future. If you have multiple KC cards in a server, they tunnel vanilla TCP/IP over PCIe.

In case it isn’t obvious, the standard way to do clustering on a rack of PC servers is with MPI, and that uses TCP/IP to talk between nodes."
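For context, an MPI program is as simple as the minimal, self-contained C sketch below, and MPI is transport-agnostic: the very same code runs whether the ranks talk over ordinary Ethernet or over TCP/IP tunnelled across PCIe (or, hypothetically, Thunderbolt).

/* Minimal MPI program: each rank reports in, rank 0 gathers a sum.
 * Build: mpicc mpi_hello.c -o mpi_hello
 * Run:   mpirun -np 4 ./mpi_hello
 * MPI does not care what carries TCP/IP underneath. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this node's id        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of nodes */

    printf("rank %d of %d reporting\n", rank, size);

    /* Sum all rank ids on rank 0 as a stand-in for real work. */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of ranks = %d\n", sum);

    MPI_Finalize();
    return 0;
}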

Ohh, I see: down-voted because it's too complicated to comprehend, perhaps. These NUCs are billed as special-purpose in role and price, so two Thunderbolt ports + generic Ethernet-over-Thunderbolt drivers = daisy-chained, generic, up-to-10-gigabit TCP/IP Ethernet speed between all Thunderbolt-connected PCs (NUC and desktop).

If TCP/IP Ethernet over USB with the existing generic Linux usb-eth interface was good enough for the old mobile iPAQ back in the day http://en.wikipedia.org/w...SB_as_an_Ethernet_network

then why would you not want, and demand, a free Windows/Linux driver that gives you up to 10-gigabit TCP/IP Ethernet speed between all Thunderbolt-connected PCs today, given the current price gouging on all Thunderbolt kit?
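That is the whole appeal of a generic driver: once the link shows up as an ordinary network interface, nothing above it has to change. A minimal TCP client in C makes the point (the 10.0.0.2 address and port 9000 are made-up illustration values); it would work identically over 1GbE, Ethernet-over-USB or a hypothetical Ethernet-over-Thunderbolt:

/* Minimal TCP client: sockets neither know nor care whether the bytes
 * travel over 1GbE, Ethernet-over-USB or Ethernet-over-Thunderbolt.
 * The peer address and port below are made up for illustration. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in peer = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9000);                  /* arbitrary test port    */
    inet_pton(AF_INET, "10.0.0.2", &peer.sin_addr); /* the daisy-chained peer */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char msg[] = "hello over whatever carries TCP/IP\n";
    write(fd, msg, sizeof msg - 1);
    close(fd);
    return 0;
}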
2 3 [Posted by: sanity  | Date: 11/09/12 11:14:31 PM]

 
Why don't you read this piece from an Intel-leaning website called SemiAccurate? Thunderbolt is... a no-go.

http://semiaccurate.com/2...pcie-devices-at-computex/

And then, for good measure, you might want to digest what AMD are proposing in order to counter Intel's Thunderbolt technology. It's called Lightning Bolt.

http://semiaccurate.com/2...ic-of-amds-lighting-bolt/
1 1 [Posted by: linuxlowdown  | Date: 11/13/12 06:06:16 AM]
 
I'm not sure why someone voted you down; after all, you did provide links, so I up-voted you again for being helpful.

Of course, you seem to miss the point, but that's OK. The point is that SOHO/home users have had to put up with a single, non-advancing 1GbE for many, many years, as the old Ethernet vendors just sit back and suck at the teat of high-margin 10GbE/40GbE/100GbE without giving the SOHO masses far better local LAN/WAN/SAN TCP/IP data throughput. That needs to change ASAP.

Of course, I also preferred the older 50GB+/s photonics version to the slow 15-foot copper 10Gb/s version; Intel has been researching silicon photonics for some time http://download.intel.com...smrelease_vPro_materials1

I don't really care who brings it (cheap nano-photonics/10GbE+), only that somebody with the cash pushes it and its related routers etc. at a reasonable mass-consumer price ASAP: IBM, Hewlett-Packard and others, including the ARM vendors.

For instance, IBM has been exploring its use for connecting transistors on chips, rather than just between larger devices.

0 0 [Posted by: sanity  | Date: 11/13/12 10:21:21 PM]

3. 
For the love of GOD, people, WTF is wrong with you?

When AMD does this you say "ahh, lame CPU with mediocre GPU", although we all know that AMD's GPU beats the **** out of the Intel HD 4000, and a dual-core clocked at 1.8GHz is an Atom disguised as an i3 to suit your pleasure.

You can do the same with AMD; it will be cheaper, better, and it can be silent:

http://www.pcper.com/news...Silent-PC-AMD-Trinity-APU

Also, an AMD E-450 motherboard + APU costs 110USD, and I have personally handled that APU. Let me tell you something: it can game and it performs great. This is just stupid Intel marketing for stupid people.
3 4 [Posted by: medo  | Date: 11/10/12 02:32:57 AM]

 
" medo: When AMD does this you say 'ahh, lame CPU with mediocre GPU', although we all know that AMD's GPU beats the **** out of the Intel HD 4000, and a dual-core clocked at 1.8GHz is an Atom disguised as an i3 to suit your pleasure. "


Just because it is clocked at 1.8GHz doesn't mean that it is "an Atom disguised as an i3".

I think most people here know that Ivy Bridge, clock for clock, demolishes Atom.

" medo: Also, an AMD E-450 motherboard + APU costs 110USD, and I have personally handled that APU. Let me tell you something: it can game and it performs great. This is just stupid Intel marketing for stupid people. "


With respect, the only stupid one here is you.

The Core i3-3217U is better than the E-450 in every way possible (including GPU performance).

I'm not even sure if you are serious.
4 3 [Posted by: maroon1  | Date: 11/10/12 09:19:31 AM]
 
 
" TA152H: The E-450 is deprecated. It's been usurped by the E2-1800, which is much cheaper to make, and does have the more powerful GPU. "


The E-450 and the E2-1800 are based on the exact same architecture; the only difference is that the E2-1800 runs at slightly higher clock speeds.

The E2-1800 uses a rebranded GPU with a slightly higher frequency than the E-450's:

http://www.anandtech.com/...ls-brazos-20-apus-and-fch

" TA152H: Have you forgotten the i3-3217U runs the GPU at 350MHz? Intel had to do that to keep the power requirements in line, but doing so made it inferior even to Bobcat in GPU performance. "


Keep in mind that the GPU turbo clock of the 17-watt Ivy Bridge parts is close to the regular models'. For example, the i3-3217U can turbo up to 1050MHz.

Here you can see GPU benchmarks for an Intel ultrabook that uses a 17-watt Ivy Bridge with the HD 4000 clocked at 350MHz (with GPU turbo):
http://www.anandtech.com/...ch-and-ultrabook-review/5

It is not far behind the 35W A8-3500M in many cases.

Bobcat (which has less than half the performance of the A8-3500M) can never match a 17W Ivy Bridge in GPU performance. Period.

" TA152H: Overall though, in most applications it's going to be better, but it's also a LOT more expensive. "


You must be kidding. The i3-3217U should destroy Bobcat in almost everything, and in some cases you should expect more than twice the performance (mostly in CPU-intensive workloads).

Bobcat is meant to compete with Atom, not with Ivy Bridge.
5 2 [Posted by: maroon1  | Date: 11/10/12 01:45:27 PM]
 
" TA152H: Yet the E2-1800 is better at everything than the E-450, which is obsolete. GPU performance is considerably faster and it turbos higher. It's interesting that you chose an obsolete part, even though the newer part is available. "


E2-1800: CPU 1.7GHz; GPU 523MHz (680MHz turbo)
E-450: CPU 1.65GHz; GPU 508MHz (600MHz turbo)

There is barely any difference between the two.


" TA152H: Ivy Bridge can't turbo up to 1050MHz at 17 watts while doing much of anything. The GPU gets destroyed in the graphs you chose, and that's just how it is. If it's using the CPU, it can't do much with the GPU. It's not meant to run at 17 watts and is seriously compromised. Bobcat has a design that was meant to run at 18 watts, and can actually use it to benefit. "


The graph I chose compared a 17W IB to the 35W A8-3500M (which uses a 400-shader GPU).

E-450 & E-2 1800 has less than half the GPU power of A8

Unfortunately, I can't find any review that compares a 17-watt Ivy Bridge to Bobcat.

However, there is a review that compares a Pentium (Sandy Bridge) to the E-450:
http://uk.hardware.info/r...rks-integrated-gpu-3dmark

Believe it or not, the GPU in the SB-based Pentium gets a higher score than the E-450.

Those Pentiums use a 6 EU GPU (equivalent to HD 2000). In other words, even a low-clocked HD 4000 should smoke these Pentiums.

4 1 [Posted by: maroon1  | Date: 11/11/12 06:14:36 AM]
 
VIA can also compete with the Intel NUC and AMD's Bobcat (40nm lithography) / Jaguar (28nm lithography).
E.g. the ZOTAC ZBOX nano VD01 with a VIA Nano X2 processor; length: 5in (127mm) / width: 5in (127mm) / depth: 1.77in (45mm).
source: http://www.zotac.com/inde...temid=100302&lang=ap

And if ZOTAC "tomorrow" upgraded the ZBOX nano VD01 to the VIA QuadCore (40nm lithography) plus the new all-in-one VIA VX11H MSP chipset (40nm lithography) with its DirectX 11 VIA Chrome 640/645 GPU, its performance would land in the middle between Brazos and the NUC.

And in 2013 VIA will update the VIA Isaiah (CN) x86-64 microarchitecture to the monolithic single-chip refresh VIA QuadCore CN-R (28nm lithography), with a new 2MB L3 cache and SIMD up to AVX2!
source: http://www.h-online.com/n...and-patents-1742927.html
3 0 [Posted by: Tralalak  | Date: 11/11/12 08:46:10 AM]
 
While it's cool that VIA are still fighting in the Windows market, it's a shame that even with this new VIA QuadCore their OSS Linux graphics performance is very lacking.

Still, they may be getting a good infusion of cash soon, as they are suing Apple over their long-standing patents, and the one thing VIA do have is good patents.

So perhaps VIA, if they have any sense, will use some of the new cash to open up these new VIA quads, improve their graphics and provide good OSS drivers.

Then the new low-power x86 OSS devs could help them once again become usable in this cooperative mobile Linux market, which seems to be rising up to take a lot of the lower/middle x86 market share away today. We shall see.
1 0 [Posted by: sanity  | Date: 11/12/12 04:16:36 PM]
 
"medo: you can do the same with AMD"

Really? So where is the AMD equivalent of the 10-gigabit Thunderbolt interconnect? Where is the AMD OSS hardware-assisted video encode? Where are the AMD OSS OpenCL compute libraries for Linux?

Games? If you want games, then buy a console, or a mobile ARM quad-core with quad graphics using ARM's new Midgard-architecture Mali-T6xx http://www.anandtech.com/...o-10x-faster-than-mali400 ("ARM's licensed CPU cores dominate the mobile space.")

AMD Linux gaming is crap, and Intel is not much better there at the moment, but at least Intel has many developers contributing to OSS Linux graphics and other areas, both desktop and mobile, for example ETC2, "the new royalty-free alternative to S3TC texture compression", which is part of what's needed for OpenGL ES 3.0 compliance, "with Intel wanting GLES 3.0 for the next Mesa release in early 2013" http://lists.freedesktop....2012-November/029853.html

AMD, meanwhile, just sacked their most competent long-time Linux developers, who were working on future CPU product enablement, compiler optimizations, enhanced Linux virtualization support and other areas, just as their balance-the-inventory-books management embraces the ARM Cortex SoCs that run all the Linux-derived OSes on ARM CPUs, never mind the new Cortex dual/quad derivatives. Brilliant. So who are the stupid people investing in AMD products now, medo?

So I'll put my money on Intel for real-time, highest-quality x264 HD encoding and for hardware-assisted video encode/decode, and on an ARM quad for probably everything else at home in 2013. It's the only valid option today if time matters to you.
2 1 [Posted by: sanity  | Date: 11/10/12 04:40:46 PM]
 
If you had any idea about quality H.264 encoding, you would not use Intel; as for ARM, we will have to wait and see. Sadly, you are right about the Linux developer team: a very bad and strange move by AMD's suicidal management.
1 1 [Posted by: mosu  | Date: 11/11/12 10:34:06 AM]
 
(^ and no, it was not me that down-voted you; I prefer to give you more facts to decide for yourself instead)

i said "X" 264 FOR quality as in the OSS software multi threaded assembly encoder that always beats AMD encode times given the same input line when run on i5/i7

(Put simply: that means Intel's single generic i5/i7 integer SIMD is faster than the two dedicated integer units inside the most modern AMD CPUs, when used properly in multi-threaded apps.)
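What "used properly" means in practice is runtime CPU dispatch: detect the SIMD level once, then route the hot loops to the widest kernel available. A minimal C sketch of the idea follows; the sum_* kernels are hypothetical stand-ins, not real x264 functions, and __builtin_cpu_supports() is a GCC/Clang x86 extension:

/* Sketch of runtime SIMD dispatch in the style x264 uses for its
 * assembly paths. The kernels below are hypothetical; in a real
 * encoder they would be hand-written SSE2/AVX assembly. */
#include <stdio.h>

static int sum_c(const int *v, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* Stand-ins that alias the portable C version so the sketch runs. */
static int sum_sse2(const int *v, int n) { return sum_c(v, n); }
static int sum_avx (const int *v, int n) { return sum_c(v, n); }

typedef int (*sum_fn)(const int *, int);

static sum_fn pick_kernel(void)
{
    __builtin_cpu_init();                    /* GCC/Clang, x86 only */
    if (__builtin_cpu_supports("avx"))  return sum_avx;
    if (__builtin_cpu_supports("sse2")) return sum_sse2;
    return sum_c;
}

int main(void)
{
    int v[] = { 1, 2, 3, 4 };
    printf("sum = %d\n", pick_kernel()(v, 4));
    return 0;
}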

not "H" as in the lower quality hardware assisted block inside the Intel CPU as standard, that Intel HW assisted block does have its uses for quick better than real time encoding for non quality encodes OC as AMD don't provide any HW assisted Encode (even crap visual quality).

And no, the Windows-only "ATI Avivo" (with super-crap, almost unusable quality) is not a real, fully HW-assisted encode; it's software running on the CPU and GPU, as was proven when it first became available locked to AMD GPUs and someone released a patched version that ran better on Intel+NV GPUs with no UVD inside. AMD PR really did a number on that UVD to sell their kit.

The fact that Intel have released their HW encode/decode documentation also means any OSS dev on any x86 OS can use it to write new software that HW-encodes/decodes on Intel, or even patch parts of x264 in the future if they care to.

You can't do the same for even video decode on AMD, as they have never released the docs for the crap "UVD: a dedicated video decode processing unit introduced with the ATI Radeon HD 2000 series", even though that has been around a lot longer.
1 1 [Posted by: sanity  | Date: 11/12/12 01:31:57 PM]

4. 
No USB 3.0 support on this? For that much, you'd expect USB 3.0 support.
3 2 [Posted by: SteelCity1981  | Date: 11/10/12 02:12:30 PM]

 
If you want, and are happy with, up to 999 USB ports software-polling your CPU every second, all the time, then it's clear that AMD CPUs and motherboards are for you, as the AMD motherboard OEMs can't seem to get enough USB ports on there to differentiate themselves. After all, it doesn't really matter much any more, taking even more vital CPU cycles away from the limited AMD CPUs just to poll USB.

But it would be fine for Intel to put a $1 four-port USB 3.0 controller on there, especially at this price, as you say, to complement the hardware-DMA Thunderbolt and give their users more options.
1 3 [Posted by: sanity  | Date: 11/10/12 07:34:55 PM]

5. 
I don't understand the use-case for this. Selling bare-bones systems is for people who tinker and can build their own, and people like that will know enough to want to build something more capable.

I can see selling something like this all wrapped up with software in a scenario where you take it home, open the box and plug it in.
1 1 [Posted by: KeyBoardG  | Date: 11/10/12 07:39:40 PM]

 
Agreed, but these versions are clearly limited in their current form for use in 2013.

They can only sit on the end of a Thunderbolt chain, for instance, and does anyone seriously buy a dual-core x86 for 2012/13 general use today? I can't even remember the last dual-core I bought, it's that long ago now.

If it had a quad-core i5, two 1GbE ports and two Thunderbolt ports (and lots of SATA; I could have a cheap, nice case 3D-printed for it, for instance http://www.webpronews.com...with-a-3d-printer-2012-10 ), then I'd say that bare-bones version would fly off the shelves this holiday season, as you could make lots of interesting kit in a very short time.

Perhaps then the Linux OSS devs might even have a reason to re-factor the existing Linux usb-eth code to use Thunderbolt as a virtual 10-gigabit TCP/IP Ethernet daisy-chained connection,

as it seems the commercial OEMs are not interested in putting in the little effort needed to write and sell generic Ethernet-over-Thunderbolt drivers with a generic Thunderbolt cable.
2 3 [Posted by: sanity  | Date: 11/10/12 08:39:10 PM]

6. 
Even though I would like to have computers that small, I am completely unable to imagine a case where these Intel NUCs could be useful. You either want to connect these computers to a display or you don't. In the first case you need either dual-link DVI or DisplayPort to drive a decent monitor. HDMI is limited to 1080 vertical lines, so it is usable only by brainwashed consumers who never read anything lengthy on their computers and therefore believe that the television screens which have replaced real computer monitors over the last 5-6 years are good enough as computer displays. For someone who only watches movies or plays games, HDMI would be good enough, but then audio outputs would be needed. In the second case, when you do not use a display, you would normally need at least two Gigabit Ethernet ports. So these Intel barebones are not usable in either case.
0 2 [Posted by: Adrian  | Date: 11/11/12 11:37:58 AM]

 
Intel just needs to add a few mini PCIe slots, so users who need more network connections or audio can add those as mini PCIe cards.
0 1 [Posted by: idonotknow  | Date: 11/12/12 04:44:46 AM]

7. 
A new form factor is in the works, and Intel designed this one; should it be called NIC, for "next Intel computing"? AMD might as well get one up soon, or that NIC/NUC will not work with AMD APUs. AMD needs one called NAC, for "next AMD computing", with its APUs; just make sure it has double the cores of the NIC/NUC, since consumers will go for more cores, just as they do with the current desktop versions. Knowing AMD, 8 cores is slow as a turtle, but to get my VirtualBox server up I will have to get that 8-core 8350 in the next few months, once I save up enough money for it.
0 2 [Posted by: idonotknow  | Date: 11/12/12 04:37:18 AM]
