Your Intel x86 CPU is Deeply Flawed (Meltdown/Spectre)

raindog308 Moderator
edited January 6 in General

Thanks to @Infinity for sharing this...

https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

"It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the contents of protected kernel memory.

"The fix is to separate the kernel's memory completely from user processes using what's called Kernel Page Table Isolation, or KPTI.

"The downside to this separation is that it is relatively expensive, time wise, to keep switching between two separate address spaces for every system call and for every interrupt from the hardware. These context switches do not happen instantly, and they force the processor to dump cached data and reload information from memory. This increases the kernel's overhead, and slows down the computer. Your Intel-powered machine will run slower as a result."

tl;dr: you're going to get patched, and you'll be trading up to 30% of your CPU performance for protection from a security flaw.

Not saying that's not the right choice, but I see rebellion and forks coming...you know, the "speed is critical, we won't upgrade past Linux 4.14..." crowd, or the "we're building a mining rig, so we want to use Dark Chester's non-isolation patches" tutorial people.

@WSS I think this is the equivalent of the introduction of the catalytic converter. Shade-tree coders?

EDIT: https://meltdownattack.com
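
If you want to see what that overhead looks like on your own box, the cheapest test is hammering a trivial syscall before and after patching. A minimal sketch (my own illustration, not from the article; syscall(SYS_getpid) is used to dodge glibc's cached getpid()):

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Time N trivial syscalls. KPTI adds a page-table (CR3) switch to
     * every kernel entry/exit, so the per-call cost should rise after
     * the patch. */
    int main(void)
    {
        const long N = 10 * 1000 * 1000;
        struct timespec a, b;

        clock_gettime(CLOCK_MONOTONIC, &a);
        for (long i = 0; i < N; i++)
            syscall(SYS_getpid);   /* bypasses glibc's getpid() cache */
        clock_gettime(CLOCK_MONOTONIC, &b);

        double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        printf("%.1f ns per syscall\n", ns / N);
        return 0;
    }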

Comments

  • MikePT Member, Provider

    This is not good. Not when we spend hundreds of Euros for a damn CPU.


  • SplitIce Member, Provider
    edited January 3

    This will be murder on technologies like nfnetlink and similar that do frequent (packet per second like) switches between address space.

    10nm proving too hard? Just slow down your existing CPUs and sell fixed editions.

  • WSS Member
    edited January 3

    @raindog308 Ironically, the cat does a lot of good on turbo cars, but they certainly don't help as much for the butt dyno as open headers.

    This, however, is just a huge "5eyez finally released information now that they're done with those backdoors.." (if you ask @Maounique) grade of fuckage. It's like reimplementing EMS page switching.

    @SplitIce said: This will be murder on technologies like nfnetlink and similar that do frequent (packet per second like) switches between address space.

    10nm proving too hard? Just slow down your existing CPUs and sell fixed editions.

    Or, you know, use ASICs instead of gutter x86 hardware.

  • eva2000 Member

    raindog308 said: "The downside to this separation is that it is relatively expensive, time wise, to keep switching between two separate address spaces for every system call and for every interrupt from the hardware. These context switches do not happen instantly, and they force the processor to dump cached data and reload information from memory. This increases the kernel's overhead, and slows down the computer. Your Intel-powered machine will run slower as a result."

    ouch... wonder if there's any work being done to make context switching faster?

  • WSS Member
    edited January 3

    @eva2000 said: ouch... wonder if there's any work being done to make context switching faster?

    ..because working around hardware bugs that can't be patched in CPU-level software is going to exponentially help if you NOP-pad it enough? The fact that you bust caching for this is seriously going to limit hardware abilities based upon the few things they've built over the last decade. Shit hasn't been getting much faster, MHz-wise, but it sure has been getting more cores and cache. Now remove that from the equation.
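
    (One genuine softener here: on CPUs with PCID/INVPCID the kernel can tag TLB entries per address space instead of flushing the lot on every page-table switch, which claws back part of the loss. A minimal sketch for checking those CPUID bits - bit positions per Intel's SDM, everything else my own illustration:)

        #include <stdio.h>
        #include <cpuid.h>

        /* PCID lets the TLB keep entries for several address spaces at
         * once, so KPTI's constant page-table switches don't force a
         * full TLB flush every time. */
        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            __get_cpuid(1, &eax, &ebx, &ecx, &edx);
            printf("PCID:    %s\n", (ecx & (1u << 17)) ? "yes" : "no");

            __cpuid_count(7, 0, eax, ebx, ecx, edx);
            printf("INVPCID: %s\n", (ebx & (1u << 10)) ? "yes" : "no");
            return 0;
        }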

  • MikePT Member, Provider
    edited January 3

    It will be interesting to see the performance impact and how cloud/VPS providers will cope with it.

  • Well, fuck..

  • Hxxx Member

    patch is probably optional. Nothing crucial here, next thread.

  • Francisco Top Provider

    @Hxxx said: patch is probably optional. Nothing crucial here, next thread.

    No, it's being merged into every public kernel. Maybe they'll add a boot time flag, no promises though.

  • WSS Member

    @Francisco said:

    @Hxxx said: patch is probably optional. Nothing crucial here, next thread.

    No, it's being merged into every public kernel. Maybe they'll add a boot time flag, no promises though.

    Get a kernel page!

  • SplitIce Member, Provider

    What are the implications of this for HNs in a cloud scenario?

  • mfs Member

    Francisco said: Maybe they'll add a boot time flag

    both pti=off and nopti are mainlined and referenced in Torvalds' kernel-parameters.txt
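
    (If you need to confirm that a given box was actually booted with one of those flags, a look at /proc/cmdline is enough - a minimal sketch, assuming only the flag names above:)

        #include <stdio.h>
        #include <string.h>

        /* Report whether the running kernel was booted with nopti or
         * pti=off on its command line. */
        int main(void)
        {
            char buf[4096] = {0};
            FILE *f = fopen("/proc/cmdline", "r");

            if (!f || !fgets(buf, sizeof buf, f)) {
                perror("/proc/cmdline");
                return 1;
            }
            fclose(f);

            if (strstr(buf, "nopti") || strstr(buf, "pti=off"))
                printf("PTI disabled via boot flag\n");
            else
                printf("no PTI override on the command line\n");
            return 0;
        }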

  • Francisco Top Provider

    @eva2000 said: from phoronix benchmarks etc of before/after kernel fixes

    I wonder what's going on with the 8700Ks that there's that big of a drop?

  • eva2000 Member

    @Francisco said:

    @eva2000 said: from phoronix benchmarks etc of before/after kernel fixes

    I wonder what's going on with the 8700Ks that there's that big of a drop?

    the i7 8700K system used a Samsung 950 PRO NVMe SSD, so could be related?

    FS-Mark performance appears to be significantly slower with this latest Linux kernel Git code, at least when using faster storage as found with the Core i7 8700K setup. The i7-8700K system was using a Samsung 950 PRO NVMe SSD while the i7-6800K system was using a slower SATA 3.0 Toshiba TR150 SSD.

  • bsdguy Member

    xen this or that, kvm, etc - forget it, this animal is a fucking slayer in our field (networking). The problem is about syscalls, i.e. switching to/from ring 0, which fucks you harder the more syscalls you make - and that number is typically somewhere between painfully and insanely high.

    Secondly, consider the fact that it's not firmware/microcode-repairable, which translates to hardwired "smart" shortcuts right in the silicon.

    The good news (if you like amd): for amd this may turn out to be just the perfect turbo, because now - and for some time to come (one doesn't change the innards of a complex design like intel's cpus in a week or so, plus considerable parts of the production line will need to be adapted) - "just get an amd based system" is about the most sensible alternative.

    I guess this one is far worse than the floating point fuckup many years ago.

  • bsdguy Member

    @eva2000

    Those phoronix benchmarks are utterly worthless for most of us as they are game-focused, whereas server loads are largely i/o bound. In fact, those tests are even worthless for normal desktop scenarios, as gaming is among the least crippled scenarios (lots and lots of calculations, not a lot of i/o).

  • WSS Member

    I like the fact that this patch currently forces ALL Intel based CPUs to use PTI.

  • jarland Provider

    Nerds!

    / downs more everclear

  • eva2000 Member

    bsdguy said: Those phoronix benchmarks are utterly worthless for most of us as they are game-focused, whereas server loads are largely i/o bound. In fact, those tests are even worthless for normal desktop scenarios, as gaming is among the least crippled scenarios (lots and lots of calculations, not a lot of i/o).

    believe more benchmarks are to come but yeah...

  • jarland Provider

    Gaming is the only benchmark that matters.

    Sincerely,
    15,000 people on Reddit probably

  • MikeA Member, Provider

    @jarland said: Gaming is the only benchmark that matters.

    Sincerely,
    15,000 people on Reddit probably

    Did anyone test with Crysis???

  • Aidan Member

    MikeA said: Did anyone test with Crysis???

    Nothing can run Crysis, no point in testing it.

  • AuroraZ Member

    @Aidan said:

    MikeA said: Did anyone test with Crysis???

    Nothing can run Crysis, no point in testing it.

    Cyrix can.

  • jarland Provider

    @MikeA said:

    @jarland said: Gaming is the only benchmark that matters.

    Sincerely,
    15,000 people on Reddit probably

    Did anyone test with Crysis???

    It's not loading under nglide for some reason.

  • WSS Member

    @AuroraZ said:

    @Aidan said:

    MikeA said: Did anyone test with Crysis???

    Nothing can run Crysis, no point in testing it.

    Cyrix can.

    CentaurHauls!

  • sdglhm Member

    Aidan said: Nothing can run Crysis, no point in testing it.

    There goes my hope of running Crysis on this sweet new quantum build.

  • perennate Member, Provider
    edited January 3

    bsdguy said: xen this or that, kvm, etc - forget it, this animal is a fucking slayer in our field (networking). The problem is about syscalls, i.e. switching to/from ring 0, which fucks you harder the more syscalls you make - and that number is typically somewhere between painfully and insanely high.

    Can't you simply boot your system with the nopti option? The attack surface for a router or similar application seems pretty small, so avoiding the performance loss seems worth it.

    Edit: or if you're talking about VMs in general, the point is that Xen HVM guests might be unable to exploit the hardware vulnerabilities because of some feature of the hypervisor. In that case, the host can leave page table isolation disabled, right? Whether the guest remains vulnerable doesn't matter too much since the user can choose whether to boot with isolation or not.

  • bsdguy Member
    edited January 3

    @perennate

    There is the caveat that the whole thing is not really known yet; right now we're working from educated guesses based on credible hints. But I'll try to explain:

    When interacting with the kernel there is memory (think "buffers" as a typical example) involved, be it explicitly ("write this buffer to the disk") or implicitly (lots of process related info the kernel keeps and needs).

    The second part one needs to understand is separation. A normal program must not be able to access kernel memory, which is privileged. Think about that and you'll see it means that loads of data (typically buffers of some sort) need to be actually copied around, which also brings considerable housekeeping work for the kernel.

    So it seemed attractive to implement that separation in a "smart" way, namely as pseudo-separation. Now, we get close to the hot spot ...

    The implementation of that mechanism is such that no "official" user program instruction can access kernel memory. But there are also "unofficial" instructions, namely those in a branch which get processed speculatively. The reason is that branches (of conditionals like an "if" statement), or more precisely wrong predictions, are very expensive, i.a. because they tend to invalidate cache lines.

    To be as fast as possible, modern processors try two strategies: a) they try to predict which branch will be taken, and b) they process the "losing" branch anyway (which is feasible as many instructions can be parallelized in a processor).

    And that's where the problem arises, because the "watchdog" that guards memory access looks only at the active branch, and so the code in the inactive branch can access kernel memory during a tiny time window while it is a) the losing but b) nonetheless pre-executed branch.
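
    (Roughly the code shape being described - a simplified sketch of my own with invented names, not any actual PoC: the branch is trained to predict "in bounds", then fed an out-of-bounds x so the speculative load touches memory it never should:)

        #include <stdint.h>
        #include <stddef.h>

        uint8_t array1[16];           /* attacker-reachable array          */
        uint8_t probe[256 * 4096];    /* one page per possible byte value  */
        size_t  array1_size = 16;

        /* Architecturally the branch is not taken for a bad x, but the
         * speculatively executed load still pulls probe[value * 4096]
         * into the cache - and that footprint survives the rollback. */
        void victim(size_t x)
        {
            if (x < array1_size)
                (void)probe[array1[x] * 4096];
        }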

    Kindly note that, again, I do not know this for certain, as intel and the OS developers still keep the lid on it for now, but I assume that my explanation is a reasonable enough attempt.

    Let's move on. What is that pre-execution in parallel, and are both branches really processed? Answer: kind of. Of course, not all op codes can be and are executed in parallel. But some can be and are. Importantly, modern processors are optimized to do as much housekeeping as possible. Think of it as a factory where one insanely expensive machine is the decisive resource; obviously one will design the whole production process so as to utilize that central machine at 100% if at all possible and to make sure that everything is ready and well prepared. Same in a processor, and one of the main preparations (and a complex one at that, considering all the cache and memory levels, dma, barriers/fences, etc.) is to have all possibly required data readily available. Wrt branch prediction and pre-execution this means having, if at all possible (and usually it is), all data for the losing branch available, too.

    Now, branch prediction is no magic. One can know (or at least guess with very good success rates) which branch the processor will consider the winning one at any given nanosecond. Hence one can create code with an "innocent", almost certainly winning branch and a "poisoned", almost certainly losing branch (which illegally accesses kernel memory).

    Now comes the second part of the evil act. Preparing or loading data sounds innocent and simple - but isn't. It is, in fact, a very complex machinery. And considering that a ram access is about 100 times or so slower than a cache access, that machinery is quite optimized towards not easily throwing away data in the cache - which is exactly what we want as evil guys, because it means that a split microsecond later some evil code will have those kernel data available, and the good thing (for the evil guy) is that those data are considered pre-checked and legal (because a cpu won't waste cycles on loading forbidden memory).
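
    (And the "grab it from the cache" step then looks roughly like this - my own sketch of the well-known Flush+Reload timing trick, reusing the probe array from the sketch above, names and structure invented: flush the probe pages, let the speculative access run, then time a read of each page; the one that comes back fast is the one the secret byte selected:)

        #include <stdint.h>
        #include <x86intrin.h>    /* _mm_clflush, __rdtscp */

        extern uint8_t probe[256 * 4096];

        /* Evict all probe pages from the cache before the attack. */
        void flush_probe(void)
        {
            for (int i = 0; i < 256; i++)
                _mm_clflush(&probe[i * 4096]);
        }

        /* Afterwards exactly one probe page is cached; the fastest
         * access reveals its index, i.e. the leaked byte value. */
        int recover_byte(void)
        {
            unsigned int aux;
            int best = -1;
            uint64_t best_time = ~0ull;

            for (int i = 0; i < 256; i++) {
                volatile uint8_t *p = &probe[i * 4096];
                uint64_t t0 = __rdtscp(&aux);
                (void)*p;
                uint64_t dt = __rdtscp(&aux) - t0;
                if (dt < best_time) { best_time = dt; best = i; }
            }
            return best;
        }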

    Again, once more, I might be wrong in some parts, as all of us can only make more or less educated guesses right now! Somehow, however, two things must happen: the loading of illegal (kernel) memory and, also important, the "cleaning" so as to allow a user process to access that memory. I'm also quite confident that I'm not too far off, because there is yet another condition to be met which strongly limits the possibilities, namely the fact that the fuckup can't be repaired or even somewhat mitigated in firmware.

    So, xen this or that doesn't play a role, because the "virtual processor" is just a slightly disguised incarnation of a real one. Moreover, virtualization happens on a layer way above what we're talking about here.

    Sorry for the long post but the guts of processors aren't that easy to explain in a few words in a reasonably understandable way.

    P.S. Based on what we know so far I assume there is a bit of halfway good news, too: I guess chances aren't bad that the fuckup allows read-only access.

  • jarland Provider

    @Neoon said: AMD! no problem.

    "We were prepared for this and now sell the top performing VZ containers." - Probably some Virtuozzo host that put AMD in their nodes purely because they thought "more cores = more containers per 1U rack space" (aka every EIG brand).

  • Maounique Member
    edited January 3

    bsdguy said: P.S. Based on what we know so far I assume there is a bit of halfway good news, too: I guess chances aren't bad that the fuckup allows read-only access.

    I think that is enough to read keys and stuff; after all, on an encrypted system those are needed all the time. However, it is like the cloudflare (?) bug which for a while allowed unencrypted blobs to be read more or less at random.
    If that is the situation (and I doubt it as much as you do without actual proven facts and at least some kind of PoC), it is highly unlikely a VM will be able to exploit this without hogging the CPU insanely, beyond any reasonable threshold for intervention - and we are talking containers here, not to mention things like Xen which play much more nicely with credits and sharing. The "poisonous" code you mention will have a very tiny window of opportunity to read AND save somewhere (which involves much heavier operations than reading the cache) contents which will be more or less garbage most of the time, even when the "innocent" code is specifically crafted to request operations involving sensitive data.
    I am also inclined to believe that, while this is absolutely possible in containers, in Xen or KVM and full virtualizations with fake hardware and all, a) it will be much more expensive for the attacker (it will hog more CPU to get any kind of valuable data), and b) some ways of handling the data in the rings HAVE to hinder, if not render moot altogether, this scenario, albeit nothing is certain right now.
    I do think at least Xen HVM, VMware, VirtualBox and the like will have serious "natural defenses", with KVM somewhat better protected than Xen-PV but still below the former in the list. The containers will be the most exposed, though; coupled with the insane number of threads and switching, this will probably be the death knell for OVZ 6 on anything other than AMD CPUs, UNLESS the devs manage to find another trick in their bursting bag which makes this kind of attack impossible at container level.

  • @WSS said: I like the fact that this patch currently forces ALL Intel based CPUs to use PTI.

    it currently forces ALL x86 cpus, but fear not, AMD comes to the rescue: http://lkml.iu.edu/hypermail/linux/kernel/1712.3/00675.html

  • bsdguy Member

    @Maounique

    I'm less optimistic. Virtualization serves a purpose, and most players (who also happen to define the game) won't waste resources, which translates to virtualization usually not implementing virtual processors (except for the relatively rare cases where other architectures are emulated). Looking at the very few processors, for instance, that actually have more or less good hardware support for x86 emulation provides a good base for a reasonable guess; typically the price to pay is in the 10 - 50% range (e.g. loongson, elbrus cpus). In other words: unattractive, and not used except in relatively rare cases where it's absolutely needed.

    What is really virtualized is typically i/o related. Moreover, the virtualization boundary can be and has been crossed; while not yet something every John or Dick can do, there are vectors available. So my take is that virtualization is to be considered an additional barrier, but by no means an insurmountable one. That is even more true for the kind of people who are able to (ab)use the fuckup we discuss here; one needs considerable knowledge to make use of it.

    Btw (and partly coming back to the virtualization assumption): let's not forget the driving force behind the current fuckup, which is a) cutting corners and tricking, and b) the desire for speed at almost any cost. To put it in a funny way: the current fuckup is itself a case of virtualization, namely the virtualization of separation.

    As for the need to be very quick (to abuse) and the "insane number of threads and switching", I don't agree. For one, we are talking about something in the range of tens of clock cycles (insanely fast), which is way below the switching granularity. Moreover, at the level of threads and other context switching the act has already been completed. When switching, the OS just sees normal memory.

    To clarify: the evil code is in the losing branch of a conditional, and we are talking about mere nanoseconds (if the winning branch is properly constructed, which seems reasonable to assume). Next, innocent-looking code grabs the illegally loaded memory from the cache, performs a cheap operation (say, xor'ing with a known value) and writes the result - which is the illegally gained and only pro forma "changed" data - back to its own memory. A sequence that happens a zillion times every second and is absolutely normal (except for one tiny detail). All in all we're talking about something in the order of 1/1000 of a thread slice.

    And again, everything - except for a small detail that happens within the processor - is perfectly normal and in no way conspicuous. I'd be surprised if a hypervisor even had the slightest chance to notice what happened; all it sees is perfectly normal code, nothing strange at all.

    Also keep in mind that if things happened on a much higher level like virtualization, intel could simply change the firmware. Think about that! Obviously it's not even something reachable by microcode! What chance would some hypervisor have? Also keep in mind that we wouldn't be talking about a performance loss in the range of 15% - 30% if virtualization were a remedy; no, then we'd be talking about a single-digit performance loss.

    If you want hope then - that's my take - you should bet on access size. Cache lines are damn small, and from what I see that's what we're talking about. So an attacker will gain just small pieces of kernel memory. Granted, that's deadly enough if well targeted, but here I'll stop because things get too speculative. We'll have to wait and see the kernel changes, because those are made by the very few who actually, really know the problem.

  • Maounique Member
    edited January 3

    My take is that, as you say, the data is small and fragmented; it then has to be moved out somehow, which is not even remotely close to 1/1000 of a thread slice like a read-and-xor. This will have to be run so many times to catch anything "important" - and even more times to assemble a pattern of anything recognizable - that it will hog the cpu too much and too often. Virtualization, while not insurmountable as you say, adds to an already convoluted and stochastic access path, further complicating a complicated situation.
    Will it absolutely protect the sensitive data? Probably not. Will it push such an operation beyond the reasonable threshold where useful data can be mined, making it not worthwhile, practically impossible? Could be, at least in some cases which may happen to be designed in such a way - or maybe not.

    A microcode update should be able to mitigate this; if nothing else, it could make the losing path always non-executable. That would not trigger such a big penalty.
    However, we are discussing theory here; while that may be entertaining, until we have a PoC to dissect it remains just that - theory. We can, at most, theorize about what could not be, rather than what is.

  • bsdguy Member
    edited January 3

    @Maounique

    If a remedy by microcode were an option, it would be out by now, or at least publicly announced. Also, there wouldn't be a need for the OS people to work on a solution, which they do. And btw, I'd be surprised if memory and cache handling were microcoded; I don't think so - for that critical part one bets on hardware.

    "Hogging up" - no. Keep in mind that that code doesn't look different from what runs on machines every second.

    Where (I guess) you are right is that it wouldn't be exactly practical to get at exactly the right, say, 64 bytes (a cache line) out of megabytes of kernel. On the other hand - and quite probably not unrelated - ASLR has been shown to promise a lot more than it can actually offer. Unfortunately some kind of ASLR, properly done, of course, would be one of the primary candidates OS people would look at for a remedy. Which is absolutely not good news and might well explain the silence and the keep-the-lid-on-it attitude of the involved parties.

    Also keep in mind that the currently considered remedy approaches (the ones that create the 15 - 30% performance loss) mean, i.a., that much of the kernel data would also need to be changed structurally (which introduces new risks and problems).

    My guess is that the OS people have 2 teams at work: one trying to find more elegant and less performance-draining remedies (which they must find anyway, sooner or later), and the other implementing the slow and cumbersome "first-aid" remedy, just to be sure.

  • Francisco Top Provider

    @jarland said:

    @Neoon said: AMD! no problem.

    "We were prepared for this and now sell the top performing VZ containers." - Probably some Virtuozzo host that put AMD in their nodes purely because they thought "more cores = more containers per 1U rack space" (aka every EIG brand).

    "Hetzner had them on for cheap".

    Still waiting to hear if there are KVM breakouts or not. The AWS HVM thing was interesting, but it's always possible they live-migrated the users if they don't use local storage.

  • LTniger Member

    Intel will fly out a firmware upgrade faster than AMD will wake up and use this failure for its own gain.

  • adly Member

    @LTniger said: Intel will fly out a firmware upgrade faster than AMD will wake up and use this failure for its own gain.

    If it could be fixed with a firmware/microcode update, I doubt Intel would have let it get this far. It is curious that AMD doesn't seem to be really pushing to get its patch mainlined.

  • jackb Member, Provider
    edited January 3

    @adly said:

    @LTniger said: Intel will fly out a firmware upgrade faster than AMD will wake up and use this failure for its own gain.

    If it could be fixed with a firmware/microcode update, I doubt Intel would have let it get this far. It is curious that AMD doesn't seem to be really pushing to get its patch mainlined.

    I am paraphrasing here and do not have much knowledge at this low level, but I believe the reason AMD is not impacted is how they handle speculative execution compared to Intel - not a kernel fix that could get mainlined. AMD are exempted from this kernel patch because they aren't vulnerable in the first place.

    The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

    Disable page table isolation by default on AMD processors by not setting the X86_BUG_CPU_INSECURE feature, which controls whether X86_FEATURE_PTI is set.
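
    (The patch being quoted is tiny - this is a sketch from memory of the lkml post linked earlier, not the verbatim commit: it just skips setting the "insecure CPU" bug bit, and with it PTI, when the vendor is AMD:)

        /* arch/x86/kernel/cpu/common.c - sketch of the AMD exclusion */
        if (c->x86_vendor != X86_VENDOR_AMD)
            setup_force_cpu_bug(X86_BUG_CPU_INSECURE);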

  • adly Member

    @jackb I'm referring to the patch AMD submitted to prevent the new protections/PTI from being applied on their processors (see the lkml link above). It's to their benefit that they aren't impacted by the performance slowdowns.

  • randvegeta Member, Provider
    edited January 3

    Well this is disturbing. So basically all us providers with Intel CPUs will either have this massive security flaw or a massive performance hit?

    ... bugger!....

    It's going to cost the industry as a whole millions! Or possibly billions! If the performance hit is 30%, then it's basically writing off 30% of the equipment's book value. To get back that lost performance would require a 43% increase in CPU capacity. Which would probably mean more of everything, since you can't just add more CPUs to existing servers like you can disk...
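
    (That 43% checks out, by the way: a 30% hit leaves you 70% of your capacity, and 1 / 0.70 ≈ 1.43, so you need roughly 43% more CPU just to break even.)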

    So yeah, all us Intel-based providers are going to see a 30% drop in (CPU) capacity...

    Shit!

  • mfs Member
    edited January 3

    PS: It appears 64-bit ARM Linux kernels will also get a set of KAISER patches, completely splitting the kernel and user spaces, to block attempts to defeat KASLR.

    I missed this PS when I first read the link in the OP.

  • AnthonySmith Administrator, Top Provider
    edited January 3

    So, unless some security workaround is found, all of a sudden it is 2011 again and everyone wants Xen PV for performance; it is going to start costing $7 for 1GB of RAM again, and within a year the ridiculous "OpenVZ is better than Xen" arguments will start again.

    fun.

  • Rodney Member

    Well Fsck

  • Francisco Top Provider

    AnthonySmith said: So, unless some security workaround is found, all of a sudden it is 2011 again and everyone wants Xen PV for performance.

    No, PV has the patch enforced, which means the performance hit is there.

    Jury's out on whether HVM/KVM is affected.
