Mark Gurman at Bloomberg is reporting that Apple will finally announce that the Mac is transitioning to ARM chips at next week's Worldwide Developer Conference (WWDC):

Apple Inc. is preparing to announce a shift to its own main processors in Mac computers, replacing chips from Intel Corp., as early as this month at its annual developer conference, according to people familiar with the plans. The company is holding WWDC the week of June 22. Unveiling the initiative, codenamed Kalamata, at the event would give outside developers time to adjust before new Macs roll out in 2021, the people said. Since the hardware transition is still months away, the timing of the announcement could change, they added, while asking not to be identified discussing private plans. The new processors will be based on the same technology used in Apple-designed iPhone and iPad chips. However, future Macs will still run the macOS operating system rather than the iOS software on mobile devices from the company.

I use the word 'finally' a bit cheekily: while it feels like this transition has been rumored forever, until a couple of years ago I felt pretty confident it was not going to happen. Oh sure, the logic of Apple using its remarkable iPhone chips in Macs was obvious, even back in 2017 or so:

  • Apple's A-series chips had been competitive on single-core performance with Intel's laptop chips for several years.
  • Intel, by integrating design and manufacturing, earned very large profit margins on its chips; Apple could leverage TSMC for manufacturing and keep that margin for itself and its customers.
  • Apple could, as they did with iOS, deeply integrate the operating system and the design of the chip itself to both maximize efficiency and performance and also bring new features and capabilities to market.

The problem, as I saw it, was why bother? Sure, the A-series was catching up on single-thread, but Intel was still far ahead on multi-core performance, and that was before you got to desktop machines where pure performance didn't need to be tempered by battery life concerns. More importantly, the cost of switching was significant; I wrote in early 2018:

  • First, Apple sold 260 million iOS devices over the last 12 months; that is a lot of devices over which to spread the fixed costs of a custom processor. During the same time period, meanwhile, the company only sold 19 million Macs; that's a much smaller base over which to spread such an investment.
  • Second, iOS was built on the ARM ISA from the beginning; once Apple began designing its own chips (instead of buying them off the shelf) there was absolutely nothing that changed from a developer perspective. That is not the case on the Mac: many applications would be fine with little more than a recompile, but high-performance applications written at lower levels of abstraction could need considerably more work (this is the challenge with emulation as well: the programs that are the most likely to need the most extensive rewrites are those that are least tolerant of the sort of performance slowdowns inherent in emulation). A sketch of the kind of code involved follows this list.
  • Third, the PC market is in the midst of its long decline. Is it really worth all of the effort and upheaval to move to a new architecture for a product that is fading in importance? Intel may be expensive and may be slow, but it is surely good enough for a product that represents the past, not the future.
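
To make the second point concrete, here is a minimal sketch of the kind of low-level code that cannot simply be recompiled; it is my own illustration, not anything from Apple's tooling, and the function name add4 is made up. The fast path is tied to the instruction set: an SSE routine on x86 has to be rewritten against NEON on ARM, or dropped to a scalar fallback.

```c
/* Illustrative sketch only: a SIMD routine whose fast path depends on the
 * instruction set, so porting from x86 to ARM means rewriting it, not just
 * recompiling it. */
#include <stdio.h>

#if defined(__x86_64__) || defined(_M_X64)
#include <immintrin.h>                  /* SSE intrinsics (x86) */
static void add4(const float *a, const float *b, float *out) {
    _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
}
#elif defined(__aarch64__)
#include <arm_neon.h>                   /* NEON intrinsics (ARM) */
static void add4(const float *a, const float *b, float *out) {
    vst1q_f32(out, vaddq_f32(vld1q_f32(a), vld1q_f32(b)));
}
#else
static void add4(const float *a, const float *b, float *out) {
    for (int i = 0; i < 4; i++)         /* portable scalar fallback */
        out[i] = a[i] + b[i];
}
#endif

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
    add4(a, b, out);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

A program full of such paths is also exactly the kind of program that suffers most under emulation, since every instruction-set-specific sequence has to be translated at a performance cost.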

However, the takeaway from the Daily Update where I wrote that was that I was changing my mind: ARM Macs felt inevitable, because of changes at both Apple and Intel.

Apple and Intel

A year before that Daily Update, Apple held a rather remarkable event for five writers where the company seemed to admit it had neglected the Mac; from TechCrunch:

Does Apple care about the Mac anymore?

That question is basically the reason that we're here in this room. Though Apple says that it was doing its best to address the needs of pro users, it obviously felt that the way the pro community was reacting to its moves (or delays) was trending toward what it feels is a misconception about the future of the Mac.

'The Mac has an important, long future at Apple, that Apple cares deeply about the Mac, we have every intention to keep going and investing in the Mac,' says Schiller in his most focused pitch about whether Apple cares about the Mac any more, especially in the face of the success of the iPhone and iPad.

'And if we've had a pause in upgrades and updates on that, we're sorry for that — what happened with the Mac Pro, and we're going to come out with something great to replace it. And that's our intention,' he says, in as clear a mea culpa as I can ever remember from Apple.

Yes, Schiller was talking about the Mac Pro, which is what the event was nominally about, but that wasn't the only Mac long in the tooth, and the ones that had been updated, particularly the laptops, were years into the butterfly keyboard catastrophe; meanwhile, there was a steady stream of new iPhones and iPads with new industrial designs and those incredible chips.

Those seemingly neglected Macs, meanwhile, were stuck with Intel, and Apple saw the Intel roadmap that has only recently become apparent to the world: it has been a map to nowhere. In 2015 Intel started shipping 14nm processors in volume from fabs in Oregon, Arizona, and Ireland; chip makers usually build fabs once per node size, seeking to amortize the tremendous expense over the entire generation, before building new fabs for new nodes. Three years later, though, Intel had to build more 14nm capacity after hiring Samsung to help it build chips; the problem is that its 10nm chips were delayed by years (the company just started shipping 10nm parts in volume this year).

Meanwhile, TSMC was racing ahead, with 7nm chips in 2017[1] and 5nm chip production starting this year; this, combined with Apple's chip design expertise, meant that as of last fall iPhone chips were comparable in speed to the top-of-the-line iMac chips. From AnandTech:

We've now included the latest high-end desktop CPUs as well to give context as to where the mobile is at in terms of absolute performance.

Overall, in terms of performance, the A13 and the Lightning cores are extremely fast. In the mobile space, there's really no competition as the A13 posts almost double the performance of the next best non-Apple SoC. The difference is a little bit less in the floating-point suite, but again we're not expecting any proper competition for at least another 2-3 years, and Apple isn't standing still either.

Last year I've noted that the A12 was margins off the best desktop CPU cores. This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least. In SPECfp2006 the A13 is still roughly 15% behind.

The Intel Core i9-9900K processor in those charts launched at a price of $999 before settling in at a street price of around $520; it remains the top-of-the-line option for the iMac, for an upgrade price of $500 over the Intel Core i5-8600K, a chip that launched at $420 and today costs $220. The A13, meanwhile, probably costs between $50 and $60.[2]

This is what made next week's reported announcement feel inevitable: Apple's willingness to invest in the Mac seems to have truly turned around in 2017 — not only has the promised Mac Pro launched, but so has an entirely new MacBook line with a redesigned keyboard — even as the cost of sticking with Intel has become not simply about money but also performance.

The Implications of ARM

The most obvious implication of Apple's shift — again, assuming the reporting is accurate — is that ARM Macs will have superior performance to Intel Macs on both a per-watt basis and a per-dollar basis. That means that the next version of the MacBook Air, for example, could be cheaper even as it has better battery life and far better performance (the i3-1000NG4 Intel processor that is the cheapest option for the MacBook Air is not yet for public sale; it probably costs around $150, with far worse performance than the A13).

What remains to be seen is just how quickly Apple will push ARM into its higher-end computers. Again, the A13 is already competitive with some of Intel's best desktop chips, and the A13 is tuned for mobile; what sort of performance gains can Apple uncover by building for more generous thermal envelopes? It is not out of the question that Apple, within a year or two, has by far the best performing laptops and desktop computers on the market, just as they do in mobile.

This is where Apple's tight control of its entire stack can really shine: first, because Apple has always been less concerned with backwards compatibility than Microsoft, it has been able to shepherd its developers into a world where this sort of transition should be easier than it would be on, say, Windows; notably the company has over the last decade deprecated its Carbon API and ended 32-bit support with the current version of macOS. Even the developers that have the furthest to go are well down the road.

Second, because Apple makes its own devices, it can more quickly leverage its ability to design custom chips for macOS. Again, I'm not completely certain the economics justify this — perhaps Apple sticks with one chip family for both iOS and the Mac — but if it is going through the hassle of this change, why not go all the way (notably, one thing Apple does not need to give up is Windows support: Windows has run on ARM for the last decade, and I expect Boot Camp to continue, and for virtualization offerings to be available as well; whether this will be as useful as Intel-based virtualization remains to be seen).

What is most interesting, and perhaps most profound, is the potential impact on the server market, which is Intel's bread and butter. Linus Torvalds, the creator and maintainer of Linux, explained in 2019 why he was skeptical about ARM on the server:

Some people think that 'the cloud' means that the instruction set doesn't matter. Develop at home, deploy in the cloud. That's bullshit. If you develop on x86, then you're going to want to deploy on x86, because you'll be able to run what you test 'at home' (and by 'at home' I don't mean literally in your home, but in your work environment). Which means that you'll happily pay a bit more for x86 cloud hosting, simply because it matches what you can test on your own local setup, and the errors you get will translate better…

Without a development platform, ARM in the server space is never going to make it. Trying to sell a 64-bit 'hyperscaling' model is idiotic, when you don't have customers and you don't have workloads because you never sold the small cheap box that got the whole market started in the first place…

The only way that changes is if you end up saying 'look, you can deploy more cheaply on an ARM box, and here's the development box you can do your work on'. Actual hardware for developers is hugely important. I seriously claim that this is why the PC took over, and why everything else died…It's why x86 won. Do you really think the world has changed radically?

ARM on Mac, particularly for developers, could be a radical change indeed that ends up transforming the server space. On the other hand, the shift to ARM could backfire on Apple: Windows, particularly given the ability to run a full-on Linux environment without virtualization, combined with Microsoft's developer-first approach, is an extremely attractive alternative that many developers just don't know about — but they may be very interested in learning more if that is the price of running x86 like their servers do.

Intel's Failure

What is notable about this unknown — will developer preferences for macOS lead servers to switch to ARM (which, remember, is cheaper and likely more power efficient in servers as well), or will the existing x86 installed base drive developers to Windows/Linux — is that the outcome is out of Intel's control.

What started Intel's fall from king of the industry to observer of its fate was its momentous 2005 decision to not build chips for the iPhone; then-CEO Paul Otellini told Alexis Madrigal at The Atlantic what happened:[3]

'We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we'd done it,' Otellini told me in a two-hour conversation during his last month at Intel. 'The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do…At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.'

What is so disappointing about this excuse is that it runs directly counter to what made Intel great; in 1965, Bob Noyce, then at Fairchild Semiconductor,[4] shocked the semiconductor world by announcing that Fairchild would price its integrated circuit products at $1, despite the fact it cost Fairchild far more than that to produce them. What Noyce understood was that the integrated circuit market was destined to explode, and that by setting a low price Fairchild would not only accelerate that growth, but also drive down its costs far more quickly than it might have otherwise (chips, remember, are effectively zero marginal cost items; the primary costs are the capital costs of setting up manufacturing lines).

That is the exact logic that Otellini 'couldn't see', so blinded was he by the seemingly dominant PC paradigm and Intel's enviable profit margins.[5] Worse, those volumes went to manufacturers like TSMC instead, providing the capital for research and development and capital investment that has propelled TSMC into the fabrication lead.

CORRECTION: A source suggested that this sentence was wrong:

What started Intel's fall from king of the industry to observer of its fate was its momentous 2005 decision to not build chips for the iPhone.

XScale, Intel's ARM chips, were engineered to be fast, not power-efficient, and Intel wasn't interested in changing its approach; this is particularly striking given that Intel had just recovered from making the same mistake with the Pentium 4 generation of its x86 chips. Moreover, the source added, Intel wasn't interested in doing any sort of customization for Apple: its attitude was take-it-or-leave-it for, again, a chip that wasn't even optimized correctly. A better sentence would have read:

Intel's fall from king of the industry to observer of its fate was already in motion by 2005: despite the fact Intel had an ARM license for its XScale business, the company refused to focus on power efficiency and preferred to dictate designs to customers like Apple, contemplating their new iPhone, instead of trying to accommodate them (like TSMC).

What is notable is that this doesn't change the sentiment: the root cause was Intel's insistence on integrating design and manufacturing, certain that its then-lead in the latter would leave customers no choice but to accept the former, and pay through the nose to boot. It was a view of the world that was, as I wrote, 'blinded…by the seemingly dominant PC paradigm and Intel's enviable profit margins.'

My apologies for the error, but also deep appreciation for the correction.

That is why, last month, it was TSMC that was the target of a federal government-led effort to build a new foundry in the U.S.; I explained in Chips and Geopolitics:

Taiwan, you will note, is just off the coast of China. South Korea, home to Samsung, which also makes the highest end chips, although mostly for its own use, is just as close. The United States, meanwhile, is on the other side of the Pacific Ocean. There are advanced foundries in Oregon, New Mexico, and Arizona, but they are operated by Intel, and Intel makes chips for its own integrated use cases only.

The reason this matters is because chips matter for many use cases outside of PCs and servers — Intel's focus — which is to say that TSMC matters. Nearly every piece of equipment these days, military or otherwise, has a processor inside. Some of these don't require particularly high performance, and can be manufactured by fabs built years ago all over the U.S. and across the world; others, though, require the most advanced processes, which means they must be manufactured in Taiwan by TSMC.

This is a big problem if you are a U.S. military planner. Your job is not to figure out if there will ever be a war between the U.S. and China, but to plan for an eventuality you hope never occurs. And in that planning the fact that TSMC's foundries — and Samsung's — are within easy reach of Chinese missiles is a major issue.

I think the focus on TSMC was correct, and I am encouraged by TSMC's decision to build a foundry in Arizona, even if they are moving as slowly as they can on a relatively small design; at the same time, what a damning indictment of Intel. The company has not simply lost its manufacturing lead, and is not simply a helpless observer of a potentially devastating shift in developer mindshare from x86 to ARM; when its own country needed to subsidize the building of a foundry for national security reasons, Intel wasn't even a realistic option, and a company from a territory claimed by China was.

To that end, while I am encouraged by and fully support this bill by Congress to appropriate $22.8 billion in aid to semiconductor manufacturers (the amount should be higher), I wonder if it isn't time for someone to start the next great U.S. chip manufacturing company. No, it doesn't really make economic sense, but this is an industry where aggressive federal industrial policy can and should make a difference, and it's hard to accept the idea of taxpayer billions going to a once-great company that has long since forgotten what made it great. Intel has prioritized profit margins and perceived lower risk for decades, and it is only now that the real risks of caring about finances more than fabrication are becoming apparent, for both Intel and the United States.

I wrote a follow-up to this article in this Daily Update.

  1. Node sizes are not an exact measure; most industry experts consider TSMC's 7nm node size to be comparable to Intel's 10nm size.
  2. This number is extremely hard to source, but to the degree I am off, it is by tens of dollars, not hundreds.
  3. I first used this quote in Andy Grove and the iPhone SE.
  4. Noyce and Gordon Moore would go on to found Intel, along with a large number of Fairchild employees, three years later.
  5. Incredibly, Otellini then doubled down: Intel actually sold the ARM division that Jobs had wanted access to a year later.

Related

Internet everywhere! Doesn't it sound like a dream? A world-wide network of computers, under the wise and democratic control of the United States of America, which crosses illusory geographical boundaries and allows human intellect to unite under a single, bright banner of hope and creativity! Who wouldn't want that to be fully integrated in their OS? Sadly, the reality of things is a bit more complex, and in this article I will aim to show what happens when 'Internet integration' goes wrong, and why I plan to be relatively conservative in this OS as far as cloudy things are concerned.

Lol, Internet!

When Internet connections started to become available to everyone at low prices, the main OS developers started to shove web-related things everywhere in their products in the name of modernity. Though the best-known relic of this era is probably the inclusion of Internet Explorer in every install of Microsoft Windows, I can pretty much guarantee that everything that could be tried in the name of the web hype was tried.

  • Naming products in the name of the Internet? Meet Apple's internet-Mac (which you probably know better as the iMac).
  • Filling the desktop with ads and URLs so that it looks more like a web page? That would be Microsoft's Windows 98, which also allowed you to use a web page as part of your desktop background.
  • Oh, and have you noticed that weird fake hard drive icon in the Mac OS X interface that tries to sell you something when you click on it? That's Apple's iDisk, part of the .Mac service (a successor to iTools and a predecessor to MobileMe and iCloud).

In retrospect, this trend is not so surprising. First, as mentioned, the web was a new and shiny thing, so to be new and shiny every OS had to be as webby as possible. Second, the Internet was a beacon of hope for OS manufacturers who wanted to sell you not only an OS but also tons of little side-services whose total cumulative cost ends up tripling the benefits. Third, as Apple and Microsoft quickly realized in these dark ages of computing, Internet connections allow you to sell blatantly unfinished software and fix it later through automated updates, blaming customers' issues on a lack of updates. All of this could recently be observed once again in another area of home computing, video game consoles, although the situation of cellphones is a bit more complicated.

Now, all this is business as usual. If you want something that doesn't attempt to be shiny, to make maximal profit, or to ship as early as possible, but simply to do what it is supposed to do and do it well, no matter the cost and delays, then the consumer society is not very well suited to you, and I welcome you to the small world of hobbyists and serious businesses. But beyond these childish games, mainstream OSs also use Internet connections for other purposes that have frightening implications from both a technical and a social point of view.

The darker side

Copy protection gone wrong

Three periods can be relatively well separated in the history of software copy-protection technologies.

In the early days, software was not protected against copying, just like every other information storage medium out there. You bought your software, and you could make as many copies for friends and family as you wanted. Of course, this was illegal, just like lending tape copies of records, and as with audio tapes no one cared, not even most developers. The implicit permission to make copies of the software was just part of the retail price.

Offline copy protection made its true appearance in the CD era, thanks to the new possibilities offered by the storage medium. Of course, variants of it existed before, but most of them, such as serial numbers or a bunch of sketches in the manual, were trivial to bypass and turned out to be more of an annoyance for users than anything else. With CDs appeared such things as deliberately damaged disc sectors that were repeatedly read to check whether the resulting data was random, hardcoded serial numbers, or even encrypted streams of data as computers grew powerful enough to decipher them. In the end, though, none of this lasted very long, because all of these technologies have the fundamental flaw that the user is provided with a working copy of the software, from which all there is to do is strip out the copy protection. And encryption where you provide the decryption key on the software's storage medium is about as effective as locking your house but leaving the key next to the door.
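
A tiny sketch of that 'key next to the door' problem; this is my own illustration with a made-up key and payload, not any real scheme. If the decryption key ships on the same medium as the encrypted data, the program necessarily carries everything a cracker needs to strip the protection.

```c
/* Hypothetical sketch: the "encrypted" payload and the key it needs sit side
 * by side on the disc, so the protection undoes itself for anyone who looks. */
#include <stdio.h>

static const unsigned char key[] = "s3cret";            /* shipped on the disc */
static const unsigned char payload[] = {                /* "HELLO" XORed with key */
    'H' ^ 's', 'E' ^ '3', 'L' ^ 'c', 'L' ^ 'r', 'O' ^ 'e'
};

int main(void) {
    char plain[sizeof(payload) + 1];
    for (size_t i = 0; i < sizeof(payload); i++)
        plain[i] = (char)(payload[i] ^ key[i % (sizeof(key) - 1)]);
    plain[sizeof(payload)] = '\0';
    printf("%s\n", plain);              /* prints HELLO: the lock opens itself */
    return 0;
}
```

Once the protected program can decrypt itself, so can anyone who reads the binary; stronger ciphers or heavier obfuscation only change how long the stripping takes.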

This is why, once Internet connections were sufficiently widespread and stable, online copy-protection technologies made their appearance. In this scheme, you are no longer provided with a working copy of your software. Your software either works in a reduced-functionality mode or doesn't work at all unless some unknown piece of information is retrieved from a web server. The idea is that said server centralizes data about all installations of a software title and is able to tell whether a given retail copy is used multiple times, in which case it won't provide the required information. Now, of course, the devil is in the details, and one can argue that all crackers have to do is monitor network accesses to retrieve all the information that's returned by the server, then get their disassembler of choice and use everything they have at hand to create a version of the software that doesn't need to contact the web server in order to run.
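
The shape of that scheme is simple enough to sketch. The following is a hypothetical client-side check; the function names, the licence-key format, and the stubbed-out server call are all my own inventions standing in for a real network round-trip.

```c
/* Hypothetical online-activation client: the program ships incomplete and only
 * unlocks full functionality once a server vouches for the licence key. */
#include <stdio.h>
#include <string.h>

/* Stub standing in for the vendor's server: a real client would send the key
 * over HTTPS and receive an unlock token, or a refusal if the key is reused. */
static int ask_activation_server(const char *licence_key, char *token, size_t len) {
    if (strcmp(licence_key, "ABCD-1234") != 0)
        return -1;                      /* server says: key unknown or already used */
    snprintf(token, len, "unlock-%s", licence_key);
    return 0;
}

int main(void) {
    char token[64];
    if (ask_activation_server("ABCD-1234", token, sizeof(token)) != 0) {
        puts("Reduced-functionality mode: could not activate.");
        return 1;
    }
    printf("Activated with token %s: full functionality enabled.\n", token);
    return 0;
}
```

The weakness described above is visible in miniature: everything after the check lives on the user's machine, so a cracker only needs to observe what the server answers once, or make the check succeed unconditionally.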

For that reason, I believe that the future of copy protection, as we already begin to see it, is software that cannot work at all without external server assistance. Examples include so-called cloud computing services, where your software runs on a distant server and your computer is only a dumb terminal to said server, like OnLive or Google Apps, and software that only makes sense when it transmits information over the Internet, like multiplayer games.

And this should frighten us.

Let's say that tomorrow I buy a knife. It's a very good knife, sharp and comfortable to hold. If the company that manufactures it goes bust tomorrow, it will remain a very good knife for all of its remaining useful life. Logical, isn't it? Now, if tomorrow I buy some trendy cloudy stuff, like an OnLive game, and the company goes bust, shuts down, and wipes its servers, my software has become useless junk. When I try to run it, I get some error message about connection problems, and that's it. My software, and in the most extreme cases all the data I have processed with it, is gone forever. No hacker will ever be able to retrieve it, because I don't have a working copy on my computer. Just a dumb terminal that sends input events over the web and retrieves moving pictures and audio frames.
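
To underline how little survives on the client in that model, here is a hypothetical skeleton of such a dumb terminal; all of the type names and the stubbed network and display calls are my own inventions. Note that no application logic appears anywhere in it.

```c
/* Hypothetical thin-client loop for a cloud-streamed program: the local machine
 * only forwards input and displays whatever frames the server sends back.
 * Nothing in this file knows what the program actually is. */
#include <stdio.h>
#include <stdbool.h>

typedef struct { int key; } input_event;
typedef struct { unsigned char pixels[320 * 240 * 3]; } video_frame;

static bool poll_input(input_event *ev)           { ev->key = 0; return false; } /* stub */
static void send_to_server(const input_event *ev) { (void)ev; }                  /* stub */
static bool receive_frame(video_frame *f)         { (void)f; return false; }     /* stub */
static void display(const video_frame *f)         { (void)f; }                   /* stub */

int main(void) {
    input_event ev;
    video_frame frame;
    for (int tick = 0; tick < 1000; tick++) {     /* a real client loops forever */
        if (poll_input(&ev))
            send_to_server(&ev);                  /* input goes out... */
        if (receive_frame(&frame))
            display(&frame);                      /* ...pixels come back; that's all */
    }
    puts("Server gone? Then there is nothing left to run, or to crack.");
    return 0;
}
```

If the vendor's servers disappear, there is simply nothing on the local disk worth preserving, which is exactly the breach of the usual bargain described in the next paragraph.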

I have no big issue with old-school copy-protection schemes. They are annoying as hell, but I know that they will be cracked one day. Which is the way all intellectual property works: temporary monopoly on distribution, then release into the public domain. Now, with the latest crop of copy protections, this contract is breached. Software and media files can, in some cases, be gone forever even if there are still people who want to use or store them and who got their copy in time. The digital society just forgets them, in an unnatural, unbelievably wrong way.

This kind of technology also has big applications in the realm of information censorship, by the way.

Software distribution monopolies

If we go closer to OSs in particular, we can see another case of Internet connectivity gone wrong, which is when OS manufacturers get a total and legal monopoly over what kind of software and media can be used on their platform, by only allowing software to be distributed through an online store they own under ridiculous distribution conditions. This is, of course, best embodied by Apple's App Store on the iOS platform.

In the past, mobile phones were fixed-purpose machines. They happened to use a CPU on the inside, but they were essentially the digital, long-range wireless equivalent of a landline phone. You could make calls, then send texts, and some had a few extra functions like an alarm clock or a calendar. That was about it. Then, slowly but surely, mobile phones entered the multimedia age. You could put sound, images, and videos on them, change the ringtone, take photographs, and play basic Java games. After that, with OSs such as Symbian and PalmOS, cellphones entered the personal computing revolution and allowed native code to run on their platform, with full performance and maximal capabilities. Software could be fetched from friends, magazines, and websites for those who would bear the download times. Fun times.

However, glitches gradually appeared in that pretty picture. Phone OS vendors looked at video game consoles and thought 'Hey, I smell money down there!'. They realized that money could be made by making software developers pay a fee to access the full capabilities of the device. And so appeared things like Symbian Signed, where software can only do basic things unless signing keys are bought from Symbian's vendor for a hefty price on a per-release basis. A basic feature set, reminiscent of Java MIDlets, was still accessible to self-signed software, but it really didn't go very far. And when Apple, one of the biggest control freaks in computer history, introduced third-party software development for their cell phone platform in 2008, they took the thing one step further.

In the iOS ecosystem, you cannot distribute software without Apple's explicit permission, and you can only distribute it on Apple's online store. Although Apple has stated some guidelines about what kind of software will never get accepted, these guidelines also include the explicit statement that, in the end, they do whatever they want. Among the kinds of software banned from the platform are interpreters and emulators (which threaten Apple's monopoly on software distribution), nudity, and every kind of newspaper-like publication that does not put Apple in a bright light. Multi-platform code which dares to run on something other than Apple's platform is a very sensitive subject, and don't even think about redoing a part of the OS which you don't like in third-party software. Besides, you must pay around a hundred dollars per year to get the iOS SDK, which may only be run on Apple computers, and any financial transaction in your software must go through Apple, with a 30% cut going to the company.

Now, my answer to this kind of stuff is simple: don't buy anything running iOS. I don't own anything from Apple myself and actively discourage everyone around me from buying their stuff. The problem is that they are not alone anymore. Other OS vendors have realized the financial benefits of such a locked-down ecosystem, and we are reaching a point where mobile platforms which allow software distribution outside of the vendor's locked-down online shop could become the exception rather than the norm. The only major mobile OS still alive that doesn't stand for this is Google's Android, which has other issues, to say the least.

Also, it would be a good idea not to fool yourself into thinking that this will only affect mobile platforms, on which most software is arguably just funny toys. The recent arrival of Apple's Mac App Store and Microsoft's Windows Store is here to warn us that the desktop and laptop form factors are not intrinsically protected from this kind of monstrosity. And if you think about the possibility of installing another OS on those platforms to avoid vendor control, think twice, for UEFI's Secure Boot feature (mandated by Windows 8's logo program, and so to be bundled on all upcoming computers) is designed specifically to prevent you from doing that.

Gratuitous use of web technology

Modern web technology is ugly and clunky. In its defense, it probably owes a lot of its horror to its legacy and to the fact that developers try to stretch it into doing things it was never designed to do. Let's recapitulate what makes web-based software inferior to locally run native code when both can be used for a given task:

  • Web-based software frequently relies on the availability of an Internet connection and a web server, although recent additions to web standards allow one to bypass this in a few specific circumstances.
  • Web technology uses interpreted programming languages, and not even well-designed ones. As a result, performance is terrible, and the use of big and insecure interpreters is required.
  • Web standards ignore the concept of modularity. This made sense when HTML was about interactive books, but it is absolute nonsense nowadays. When a web browser parses, say, HTML code, it filters the markup through the namespace of all known HTML instructions, which means a slow parsing process and a huge RAM footprint. And since the performance of web technology is terrible, web developers and browser manufacturers strive to include more and more features in the core standards, which in turn means that web browsers become bigger and slower, which leads to yet more stuff being added to the core standards, in a vicious circle that will never end.
  • Web standards are not actually standard. When you build web-based stuff that's a tiny bit complex, you cannot even assume that if it works in your web browser, it will work in all other browsers, on all platforms. You have to test your stuff everywhere, in a cumbersome and time-consuming process. That's because web standards are so bloated that everyone fails at implementing them properly in a reasonable time. Markup can also be so ill-defined that everyone does their own thing, as in the recent video markup debacle where proprietary web browser manufacturers couldn't bear to agree on using the same royalty-free video formats as their competitors. In the end, what's most likely to work everywhere is, ironically, plugin-based technology such as Flash or Java, which tends to be platform-specific, ill-integrated into the web browser, and rather heavy, but pretty powerful if you can see beyond that.
  • There is no such thing as standard widgets and UI on the web. Everyone does their own thing, like a bunch of ants on LSD, to paraphrase Jakob Nielsen. This results in the web being a huge mess from a usability point of view, compared to native code written by someone with the minimal amount of mental discipline required to use a standard widget set.
  • The web is massively based on ad funding, which results in abusive advertising everywhere, from distracting animated GIF banners to giant talking Flash ads that take up your whole browser window. Special mention goes to the one from Microsoft which I saw the other day: while I was reading a technology-related article, something animated popped up in front of me and started loading a video. The beginning of that video: 'Think Hotmail is spammy?'. Well… yes, stupid!
  • While web technology's goal of platform independence is admirable, I have to say that it fails miserably as soon as you consider a sufficiently wide range of screen sizes. As explained earlier, a local API can do much better in the realm of device independence. On the web, what we get is desktop websites that render poorly on phones, alongside 'iPhone-specific' or 'iPad-specific' sites that deny their web nature and do their best to look like an iOS application. Ridiculous.

So, native code is better when you can afford it, and web technology is best left to blogs and shops. Alright. Now, what is my problem with that? My problem is that at the other extreme from the mobile nuts who code the most stupid things in native code, there are web nuts who try to shove web technologies everywhere, down to the most vital OS layers. It started with Microsoft, which wanted to justify its monopolistic inclusion of IE in Windows by making it a vital part of the Windows file explorer, in what would prove to be a horrible security mistake for years to come; then the thing grew out of control, and now we have stuff as basic as UI widgets being rendered using a web browser, as is gradually being introduced in Qt and was done previously in the Windows Vista/7 Control Panel. In Windows 8, Microsoft is also apparently seriously considering favoring 'applications' made of nonstandard HTML, JavaScript, and CSS.

Considering how bloated and insecure modern web browsers are, you can decide for yourself how good an idea that is. Myself, I've decided: whatever final shape my GUI toolkit takes, CSS and JavaScript will not be part of it.

To conclude…

In spite of all the bitterness I may exhibit here, I actually like the current Internet, with its blogs, forums, and e-commerce sites. It's a nice place to communicate, learn, buy and distribute stuff, read comics, and so on. For all that I hate current web technology, I admire what web developers manage to do with it. Working with such a horrible tool must be so painful that I can almost understand how people who got burned become so traumatized that they start to put web stuff everywhere, including in places where it has no business being.

However, I am frustrated by some horrible byproducts of web technology, such as software which needs a distant server to work, excessive OS-vendor control over software distribution, and gratuitous use of web technology in areas where better alternatives exist. In this OS, I aim to provide a future-proof and open-minded ecosystem in which everyone, not only a few wealthy companies, can scratch their own itch without any risk of censorship, which is incompatible with the first two issues. From a technical point of view, I also consider web technology to be ridiculously immature for its age, and as such I will not use it except in areas where it's explicitly needed: websites and content hosting. That is, unless a sudden web-standards reform surprises me by ditching the current mess and starting over from a clean slate based on sane programming principles and modern web use cases. But such a rewrite would also need to gain acceptance, which is quite unlikely…




