(Meredith Whittaker) On Signal, Encryption and AI

Wired has an interview with Meredith Whittaker from Signal - her stances on Surveillance Capitalism, Signal’s not-for-profit structure and AI make for very interesting reading.

Yeah. I don’t think anyone else at Signal has ever tried, at least so vocally, to emphasize this definition of Signal as the opposite of everything else in the tech industry, the only major communications platform that is not a for-profit business.

Yeah, I mean, we don’t have a party line at Signal. But I think we should be proud of who we are and let people know that there are clear differences that matter to them. It’s not for nothing that WhatsApp is spending millions of dollars on billboards calling itself private, with the load-bearing privacy infrastructure having been created by the Signal protocol that WhatsApp uses.

Now, we’re happy that WhatsApp integrated that, but let’s be real. It’s not by accident that WhatsApp and Apple are spending billions of dollars defining themselves as private. Because privacy is incredibly valuable. And who’s the gold standard for privacy? It’s Signal.

I think people need to reframe their understanding of the tech industry, understanding how surveillance is so critical to its business model. And then understand how Signal stands apart, and recognize that we need to expand the space for that model to grow. Because having 70 percent of the global market for cloud in the hands of three companies globally is simply not safe. It’s Microsoft and CrowdStrike taking down half of the critical infrastructure in the world, because CrowdStrike cut corners on QA for a fucking kernel update. Are you kidding me? That’s totally insane, if you think about it, in terms of actually stewarding these infrastructures.

So you’re saying that AI and surveillance are self-perpetuating: You get the materials to create what we call AI from surveillance, and you use it for more surveillance. But there are forms of AI that ought to be more benevolent than that, right? Like finding tumors in medical scans.

I guess, yeah, although a lot of the claims end up being way overhyped when they’re compared to their utility within clinical settings.

What I’m not saying is that pattern matching across large sets of robust data is not useful. That is totally useful. What I’m talking about is the business model it’s contained in.

OK, say we have radiological detection that actually is robust. But then it gets released into a health care system where it’s not used to treat people, where it’s used by insurance companies to exclude people from coverage—because that’s a business model. Or it’s used by hospital chains to turn patients away. How is this actually going to be used, given the cost of training, given the cost of infrastructure, given the actors who control those things?

AI is constituted by this mass Big Tech surveillance business model. And it’s also entrenching it. The more we trust these companies to become the nervous systems of our governments and institutions, the more power they accrue, the harder it is to create alternatives that actually honor certain missions.

Just seeing your Twitter commentary, it seems like you’re calling AI a bubble. Is it going to self-correct by imploding at some point?

I mean, the dotcom bubble imploded, and we still got the Big Tech surveillance business model. I think this generative AI moment is definitely a bubble. You cannot spend a billion dollars per training run when you need to do multiple training runs and then launch a fucking email-writing engine. Something is wrong there.

But you’re looking at an industry that is not going to go away. So I don’t have a clear prediction on that. I do think you’re going to see a market drawdown. Nvidia’s market cap is going to die for a second.

On 'Simulated Worlds'

OpenAI’s video generation model Sora is incredible even in its current iteration, and it’ll undoubtedly get better in the coming months. The post on their website makes for fascinating reading.

Extending generated videos. Sora is also capable of extending videos, either forward or backward in time.

Long-range coherence and object permanence. A significant challenge for video generation systems has been maintaining temporal consistency when sampling long videos. We find that Sora is often, though not always, able to effectively model both short- and long-range dependencies. For example, our model can persist people, animals and objects even when they are occluded or leave the frame. Likewise, it can generate multiple shots of the same character in a single sample, maintaining their appearance throughout the video.

Interacting with the world. Sora can sometimes simulate actions that affect the state of the world in simple ways. For example, a painter can leave new strokes along a canvas that persist over time, or a man can eat a burger and leave bite marks.

These advancements, alongside how far LLMs and other transformer-based technologies have come in the past few months, have been quite something to behold. Equal parts exciting and terrifying, they make it hard not to think about how, and how much, they’ll impact industries and society at large. It will likely become harder (and more time-consuming) to sift genuine advancements from the next grift (NFTs, anyone?). Art, music, technology, video games, programming, editing, writing, law, disinformation, misinformation, capital markets, cybersecurity, democracy, and medicine will all invariably see some impact. A small part of me thinks that, as amazing as all this is right now, ‘AI’ (quotes intentional) may not be immune to enshittification, not just from the pressure to monetise but also from the unstoppable deluge of low-quality, unimaginative generated content.

On Worldcoin, DAOs and digital identity black markets

Molly White has a great essay on Sam Altman’s iris scanning orb and its purported use cases.

Much of Worldcoin’s promises are predicated on the questionable idea that highly sophisticated artificial intelligence, even artificial general intelligence, is right around the corner. It also hinges on the “robots will take our jobs!” panic — a staple of the last couple centuries — finally coming to bear. Worldcoin offers other use cases for its product too, like DAO voting, but it is not the promise to solve DAO voting that earned them a multi-billion dollar valuation from venture capitalists.

Other use cases that Worldcoin has offered seem to assume that various entities — governments, software companies, etc. — would actually want to use the Worldcoin system. This seems highly dubious to me, particularly given that many governments have established identification systems that already enjoy widespread use. Some even employ biometrics of their own, like India’s Aadhaar. There’s also the scalability question: Worldcoin operates on the Optimism Ethereum layer-2 blockchain, a much speedier alternative to the layer-1 Ethereum chain to be sure, but any blockchain is liable to be a poor candidate for handling the kind of volume demanded by a multi-billion user system processing everyday transactions.

What will happen when you promise people anywhere from $10 to $100 for scanning their eyeball? What if that’s not dollars, but denominated in a crypto token, making it appealing to speculators? And what if some people don’t have the option to scan their own eyeballs to achieve access to it?

A black market for Worldcoin accounts has already emerged in Cambodia, Nigeria, and elsewhere, where people are being paid to sign up for a World ID and then transfer ownership to buyers elsewhere — many of whom are in China, where Worldcoin is restricted. There is no ongoing verification process to ensure that a World ID continues to belong to the person who signed up for it, and no way for the eyeball-haver to recover an account that is under another person’s control. Worldcoin acknowledges that they have no clue how to resolve the issue: “Innovative ideas in mechanism design and the attribution of social relationships will be necessary.“ The lack of ongoing verification also means that there is no mechanism by which people can be removed from the program once they pass away, but perhaps Worldcoin will add survivors’ benefits to its list of use cases and call that a feature.

Relatively speaking, scanning your iris and selling the account is fairly benign. But depending on the popularity of Worldcoin, the eventual price of WLD, and the types of things a World ID can be used to accomplish, the incentives to gain access to others’ accounts could become severe. Coercion at the individual or state level is absolutely within the realm of possibility, and could become dangerous.

On Facial Recognition and Identity Proofing

Wired has a good piece on the IRS in the US caving to public outcry and ditching its integration with ID.me - a service that was supposed to verify identities by matching video selfies to existing records. It’s understandable why this caused concern, given that facial recognition is rife with false matches, biases and a reputation for invasiveness. With fraud a pressing issue now that a majority of us (at least in Australia) access nearly every civic service online, governments will need to think about how they balance policy, privacy and messaging.

Unfortunately, the landscape at the moment is messy and populated by a number of third-party vendors still finding their feet in an area where privacy and policy concerns are outweighed by sexier usability and convenience use cases.

“The fact we don’t have good digital identity systems can’t become a rationale for rushing to create systems with Kafkaesque fairness and equity problems.” - Jay Stanley, ACLU

It’ll be interesting to see how Australia’s Trusted Digital Identity Framework (TDIF) will look to address some of these inherent problems through a continuous expansion of its standards.

On web3

(Note to self)

Moxie has some great thoughts on web3 even though what it really means depends on who you ask.

NFTs:

Instead of storing the data on-chain, NFTs instead contain a URL that points to the data. What surprised me about the standards was that there’s no hash commitment for the data located at the URL. Looking at many of the NFTs on popular marketplaces being sold for tens, hundreds, or millions of dollars, that URL often just points to some VPS running Apache somewhere. Anyone with access to that machine, anyone who buys that domain name in the future, or anyone who compromises that machine can change the image, title, description, etc for the NFT to whatever they’d like at any time (regardless of whether or not they “own” the token). There’s nothing in the NFT spec that tells you what the image “should” be, or even allows you to confirm whether something is the “correct” image.
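As a rough illustration of what a hash commitment would buy you: if the token recorded a digest of the content alongside the URL, any client could fetch the file and check it against that digest. The sketch below is hypothetical (the URL and expected digest are made up) and, as Moxie notes, this check isn’t something the NFT metadata standard actually requires.

```python
# Hypothetical sketch: verifying off-chain NFT content against an on-chain
# hash commitment. The spec Moxie describes stores only a URL, so no such
# check exists today; the URL and expected digest here are invented.
import hashlib
import urllib.request

TOKEN_URI = "https://example.com/nft/1234/image.png"   # what is stored on-chain today
EXPECTED_SHA256 = "0" * 64                              # what a commitment would add

def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        return resp.read()

def content_matches_commitment(url: str, expected_hex: str) -> bool:
    """Fetch the referenced content and compare its SHA-256 digest to the commitment."""
    digest = hashlib.sha256(fetch(url)).hexdigest()
    return digest == expected_hex

if __name__ == "__main__":
    print("content matches commitment:", content_matches_commitment(TOKEN_URI, EXPECTED_SHA256))
```

Content-addressed storage (IPFS-style identifiers derived from the content hash) gives you the same property by construction; the point above is simply that a bare URL gives you none of it.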

web3 and Platforms (web2):

The people at the end of the line who are flipping NFTs do not fundamentally care about distributed trust models or payment mechanics, but they care about where the money is. So the money draws people into OpenSea, they improve the experience by building a platform that iterates on the underlying web3 protocols in web2 space, they eventually offer the ability to “mint” NFTs through OpenSea itself instead of through your own smart contract, and eventually this all opens the door for Coinbase to offer access to the validated NFT market with their own platform via your debit card.

At the end of the stack, NFT artists are excited about this kind of progression because it means more speculation/investment in their art, but it also seems like if the point of web3 is to avoid the trappings of web2, we should be concerned that this is already the natural tendency for these new protocols that are supposed to offer a different future.

Clients and Servers in the Decentralised Web:

One thing that has always felt strange to me about the cryptocurrency world is the lack of attention to the client/server interface. When people talk about blockchains, they talk about distributed trust, leaderless consensus, and all the mechanics of how that works, but often gloss over the reality that clients ultimately can’t participate in those mechanics. All the network diagrams are of servers, the trust model is between servers, everything is about servers. Blockchains are designed to be a network of peers, but not designed such that it’s really possible for your mobile device or your browser to be one of those peers.

With the shift to mobile, we now live firmly in a world of clients and servers – with the former completely unable to act as the latter – and those questions seem more important to me than ever. Meanwhile, ethereum actually refers to servers as “clients,” so there’s not even a word for an actual untrusted client/server interface that will have to exist somewhere, and no acknowledgement that if successful there will ultimately be billions (!) more clients than servers.

The increasing complexity of creating software:

At this point, software projects require an enormous amount of human effort. Even relatively simple apps require a group of people to sit in front of a computer for eight hours a day, every day, forever. This wasn’t always the case, and there was a time when 50 people working on a software project wasn’t considered a “small team.” As long as software requires such concerted energy and so much highly specialized human focus, I think it will have the tendency to serve the interests of the people sitting in that room every day rather than what we may consider our broader goals. I think changing our relationship to technology will probably require making software easier to create, but in my lifetime I’ve seen the opposite come to pass. Unfortunately, I think distributed systems have a tendency to exacerbate this trend by making things more complicated and more difficult, not less complicated and less difficult.

On the SolarWinds Breach

Where to begin. It’s almost impossible to comprehend what the fallout of this breach will be in the immediate to medium term; in fact, there isn’t yet enough information out there to conduct an effective post-mortem, so to speak.

One thing is for certain - organisations are going to be wary of trusting ‘technology solutions’ from vendors. This isn’t to say that SolarWinds (and FireEye) did not have adequate measures in place; just that breaches are inevitable, and organisations that rely on technology vendors are dependent on those vendors having adequate controls in place alongside stringent self-audits. Organisations that trusted SolarWinds to push Orion updates to their networks in effect trusted that SolarWinds had a strong handle on its own security posture. In Australia, CPS 234 mandates that APRA-regulated entities have information security measures in place and includes cybersecurity assessments by independent assessors. Obviously, its effectiveness will boil down to the thoroughness of the assessor.

All this gets more complicated when you look into how the breach occurred in the first instance. It increasingly seems that the attackers leveraged widely used protocols and solutions. Ars Technica has a fascinating report referencing security firm Volexity, which encountered the same attackers in 2019; at the time, they bypassed MFA protections for Microsoft Outlook Web App (OWA) users.

Toward the end of the second incident that Volexity worked involving Dark Halo, the actor was observed accessing the e-mail account of a user via OWA. This was unexpected for a few reasons, not least of which was the targeted mailbox was protected by MFA. Logs from the Exchange server showed that the attacker provided username and password authentication like normal but were not challenged for a second factor through Duo. The logs from the Duo authentication server further showed that no attempts had been made to log into the account in question. Volexity was able to confirm that session hijacking was not involved and, through a memory dump of the OWA server, could also confirm that the attacker had presented a cookie tied to a Duo MFA session named duo-sid.

Krebs on Security has a great write-up with a sobering quote from the DHS’s Cybersecurity and Infrastructure Security Agency.

CISA’s advisory specifically noted that “one of the principal ways the adversary is accomplishing this objective is by compromising the Security Assertion Markup Language (SAML) signing certificate using their escalated Active Directory privileges. Once this is accomplished, the adversary creates unauthorized but valid tokens and presents them to services that trust SAML tokens from the environment. These tokens can then be used to access resources in hosted environments, such as email, for data exfiltration via authorized application programming interfaces (APIs).”

CISA goes on to advise that if an organisation identifies ‘SAML abuse’, mitigating individual issues might not be enough; the entire identity store needs to be considered compromised. And unfortunately, the only remedy for that is rebuilding identity and trust services from the ground up.
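To see why a stolen signing key is so catastrophic, here’s a minimal sketch of the trust model using the `cryptography` library. It is illustrative only: real SAML uses XML signatures over structured assertions, not raw byte signing, and the assertion contents here are invented. The point is that relying parties only check that an assertion verifies against the identity provider’s public key, so whoever holds the private key can mint assertions that pass every check.

```python
# Minimal sketch of signature-based trust, standing in for SAML's XML
# signatures. Illustrative only; real SAML uses XML-DSig over assertions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The identity provider's signing key. If an attacker exfiltrates this,
# every assertion they forge is indistinguishable from a legitimate one.
idp_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
idp_public_key = idp_key.public_key()  # what relying parties trust

def sign_assertion(assertion: bytes) -> bytes:
    return idp_key.sign(assertion, padding.PKCS1v15(), hashes.SHA256())

def relying_party_accepts(assertion: bytes, signature: bytes) -> bool:
    try:
        idp_public_key.verify(signature, assertion, padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False

# A forged assertion signed with the stolen key verifies just fine.
forged = b"subject=admin@victim.example; groups=DomainAdmins"
print(relying_party_accepts(forged, sign_assertion(forged)))  # True
```

Which is why, per the advisory, cleaning up individual accounts isn’t enough once the signing material itself is suspect.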

Additional Reading:

VMware Flaw a Vector in SolarWinds Breach?

SolarWinds hackers have a clever way to bypass multi-factor authentication

FireEye Threat Research

On the ‘Sign In With Apple’ Takeover Flaw

So, the ‘Sign in with Apple’ feature had a significant vulnerability that has thankfully been patched. An independent bug bounty hunter, Bhavuk Jain, has posted a detailed take on how he found the vulnerability and reported it to Apple. It’s a pretty fascinating read.

My favourite Sign in with Apple feature has been the ability to mask or withhold your email address from a third party during the authentication process (bye, unsolicited emails!). When the email address is hidden, Apple generates a JSON Web Token (JWT, or JOT if you’re that guy; a standard for securely transmitting claims between two parties) that the third-party app then uses to authenticate the user. Bhavuk found that the payload returned by Apple included a URL on Apple’s servers to which he could send just an email address (any email address) and get authenticated without a password: Apple essentially sent back a valid authentication token that could be used with the third-party app.
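To make the claims-in-a-token idea concrete, here is a minimal sketch using the PyJWT library: a signed token carries claims (issuer, audience, subject), and the relying party verifies the signature and those claims before trusting them. The secret, issuer and audience values are made up for illustration; Apple’s real flow signs tokens with RS256 against its published public keys rather than a shared secret.

```python
# Minimal sketch of JWT issuance and verification using PyJWT.
# Secret, issuer, audience and email are hypothetical; Apple's real flow
# uses RS256 and published public keys, not a shared secret.
import datetime
import jwt  # pip install PyJWT

SECRET = "demo-signing-secret"  # stand-in for a real signing key

def issue_token(email: str) -> str:
    claims = {
        "iss": "https://issuer.example.com",   # who issued the token
        "aud": "com.example.thirdpartyapp",    # which app it is intended for
        "sub": email,                          # the authenticated identity
        "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=10),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Signature, expiry, audience and issuer are all checked here; a token
    # failing any check raises an exception and must be rejected.
    return jwt.decode(
        token,
        SECRET,
        algorithms=["HS256"],
        audience="com.example.thirdpartyapp",
        issuer="https://issuer.example.com",
    )

if __name__ == "__main__":
    token = issue_token("user@example.com")
    print(verify_token(token)["sub"])  # -> user@example.com
```

The flaw Bhavuk describes sat upstream of checks like these: Apple would mint a perfectly valid token for an email address the requester didn’t control, so every downstream verification passed.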

This was a glaring flaw no matter how you slice it. OpenID Connect and OAuth consent flow standards exist for a reason: no matter how excellent your engineers are or how sophisticated your own spin on existing standards is, the risk of rolling your own authentication is pretty damn high.

On COVIDSafe

The Australian government rolled out their contact tracing app, COVIDSafe, through the Apple and Android app stores at 6 last evening. Understandably, there has been quite a bit of apprehension leading up to it, but it’s starting to sound like there might be cautious uptake of the app among the general public, given that 1 million people downloaded it in the last 12 hours. If the AFR’s readership (possibly right-leaning and more trusting of the current government) is a reliable metric, the consensus is that the app may eventually help ease the restrictions in place.


A few things:

  • It’s an app with a purportedly singular purpose: contact tracing. It uses Bluetooth Low Energy to exchange a handshake between two devices that are within 1.5 m of each other for over 15 minutes.

  • The only data collected are a name or pseudonym, an age range and your phone number. I realize that you need phone numbers to get in touch with individuals who may have spent time with a diagnosed person, but having that unique identifier in the database negates any perceived benefit of using a pseudonym. An alternative would be to notify people of proximity to confirmed cases through device notifications alone; granted, it would be harder to ensure quarantine or isolation measures.

  • ”Not even a court order can penetrate the law, not even a court order, or the investigation of an alleged crime would be allowed to use [the data]”.

  • There are two levels of consent to go through during registration and they’re quite detailed - it would be hard to implement something stricter without sacrificing usability and uptake.

  • The data store is hosted on AWS instances in the ACT - I assume they’ll use the same protected instances that the government already uses.

  • The data stored on devices is held for a rolling period of 21 days, much like how you can configure browsing history deletion on Google’s My Activity page (see the sketch after this list).

  • It’s a bit unclear what permissions and privileges the ‘COVIDSafe Administrator’ will have, but I assume quite a few given they can delete registration data.

  • The punishments for misuse of app data are severe - 5 years’ jail time and $63,000 in fines.

  • Law firm Maddocks conducted a 48-page privacy assessment and made 19 recommendations, all of which appear to have been accepted.

  • Source code will be released over the coming days.

  • Singapore’s version of the app, whose source code has been leveraged for COVIDSafe, hasn’t quite seen the uptake you’d want for something like this. Approximately 20% of Singapore residents have downloaded it, and Australia wants at least 40% of the population to download COVIDSafe for it to be useful.
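On the rolling 21-day window mentioned above: conceptually it’s just a periodic purge of encounter records older than the cut-off. The sketch below is my own illustration (field names and structure are invented), not the app’s actual implementation.

```python
# Illustrative sketch of a rolling 21-day retention window for on-device
# encounter records. Field names are invented; this is not COVIDSafe's code.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

RETENTION = timedelta(days=21)

@dataclass
class Encounter:
    anonymous_id: str     # rotating identifier received over BLE
    seen_at: datetime     # when the handshake was recorded

def purge_expired(encounters: List[Encounter], now: datetime) -> List[Encounter]:
    """Keep only encounters recorded within the retention window."""
    cutoff = now - RETENTION
    return [e for e in encounters if e.seen_at >= cutoff]

if __name__ == "__main__":
    now = datetime.now()
    log = [
        Encounter("abc123", now - timedelta(days=3)),
        Encounter("def456", now - timedelta(days=25)),  # outside the window, purged
    ]
    print(len(purge_expired(log, now)))  # -> 1
```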

The debate over this has been lively and has no doubt contributed to the way the app has been rolled out - fingers crossed that every safeguard in place is meaningfully enforced.

Links:

AFR: 'Privacy by design' approach for COVIDSafe app

The Verge: Why Bluetooth apps are bad at discovering new cases of COVID-19

ITNews: Australia's COVID tracing app better than Singapore's: Health chief

On The Lost Cult of Cinema

The excellent Jake Wilson, who writes for The Age, has penned quite a moving essay on the loss of cinemas and the communal experience of movie-going that is especially felt now. High definition streaming really is no substitute for the tactility of sitting down on uncomfortable seats with fellow film-loving patrons.

Saint Maud at Lido Cinemas in Hawthorn

In this sense, the cinema I’ve valued most since childhood has long been a waning cult, kept alive by a circle of devotees. If that sounds mystical, I’m not about to argue. Movie theatres are traditionally adorned like temples, and in my eyes that’s what they are, or should be: places designed for the conjuring of visions that connect us to something larger than ourselves.

Putting the technological specifics to one side, perhaps the cult of cinema isn’t so unique. All of us at present face the loss of our particular temples, where we meet in the flesh with others who dream the same dream. What these cumulative losses will mean for society remains, for now, hard to say. Soon, we will begin to find out, stuck at home with our Netflix queues and our faltering internet speeds, awaiting the moment when our city can breathe again.

Source: Temple of doom? A film critic mourns the lost cult of cinema.

Digital Identities & Remote Working

Well, 2020 isn’t quite what we thought it’d be.

A colleague and I wrote a piece for KPMG’s Newsroom on managing digital identities securely, especially now that a majority of knowledge workers connect remotely. The piece was meant for a broader audience, but I still hope there are some tangible takeaways if you work in or lead security teams.

—————-

All of us have been taken aback by the rate at which the ongoing crisis has escalated and the public health, social and economic impact it has had on millions of people around the world. Given its unprecedented nature, organisations across industries in Australia have had their existing continuity protocols and technology stacks stress-tested at scales that were never anticipated.

Organisations would do well to revisit their Identity and Access Management (IAM) frameworks and solutions for both critical and non-critical applications, especially given that it is entirely possible that passwords are reused across systems. With password hygiene a prevalent issue, external users (consumer and citizen identities) who use an email address as their credential can fall victim to password spraying attacks that, while not particularly sophisticated, are effective.
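Password spraying is the inverse of brute force: a handful of common passwords tried against many accounts, which per-account lockouts don’t catch. Below is a hedged sketch of one detection heuristic (the thresholds and log-record shape are invented): flag a source that fails logins against many distinct accounts in a short window.

```python
# Illustrative detection heuristic for password spraying: a single source
# failing logins across many *distinct* accounts in a short window.
# Thresholds and the log-record shape are invented for this example.
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Dict, List, NamedTuple, Set

class FailedLogin(NamedTuple):
    source_ip: str
    username: str
    at: datetime

WINDOW = timedelta(minutes=30)
DISTINCT_ACCOUNT_THRESHOLD = 20

def suspected_spraying_sources(events: List[FailedLogin], now: datetime) -> Set[str]:
    """Return source IPs that failed against an unusually broad set of accounts recently."""
    recent = [e for e in events if now - e.at <= WINDOW]
    accounts_per_ip: Dict[str, Set[str]] = defaultdict(set)
    for e in recent:
        accounts_per_ip[e.source_ip].add(e.username)
    return {ip for ip, users in accounts_per_ip.items()
            if len(users) >= DISTINCT_ACCOUNT_THRESHOLD}
```

In practice you would feed something like this from your identity provider’s sign-in logs and pair it with MFA and sensible lockout policies rather than relying on detection alone.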

As more people work remotely and our collective anxiety increases, it is not entirely unexpected that cyber attacks and phishing scams will increase sharply over the coming weeks and months.

From an IAM perspective, there are a few areas that organisations could potentially look at to shore up their existing systems securely:

1. Make changes to the IAM layer

Organisations may invariably compromise on security at the network layer to ensure that their employees can access critical and non-critical applications remotely. A secure IAM platform will act as a compensating control in such instances. Changes get increasingly difficult the further down the stack you go (think the lower layers of the Open Systems Interconnection (OSI) model), so compensating changes at the identity layer are easier to implement while remaining architecturally sound.

2. Secure cloud solutions

Single Sign-On (SSO) to cloud applications will be the norm for many organisations that empower their employees to work remotely; however, convenience can come at the cost of cloud security. While MFA is by no means perfect, using it for high-risk transactions or privileged users reduces the attack surface.
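As an example of applying MFA to high-risk transactions rather than to every login, here is a small step-up sketch using the pyotp library; the risk rule, action names and account flags are invented for illustration.

```python
# Sketch of step-up MFA: only high-risk actions demand a second factor.
# The risk rule and action names are invented; pyotp supplies the TOTP maths.
import pyotp  # pip install pyotp

HIGH_RISK_ACTIONS = {"change_payee", "export_all_records", "grant_admin"}

def needs_step_up(action: str, is_privileged_user: bool) -> bool:
    """Require a second factor for privileged users or sensitive actions."""
    return is_privileged_user or action in HIGH_RISK_ACTIONS

def verify_step_up(totp_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(totp_secret).verify(submitted_code, valid_window=1)

if __name__ == "__main__":
    secret = pyotp.random_base32()      # enrolled in the user's authenticator app
    code = pyotp.TOTP(secret).now()     # what the app would display
    if needs_step_up("change_payee", is_privileged_user=False):
        print("step-up ok:", verify_step_up(secret, code))
```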

3. Secure Privileged Accounts

Securing privileged accounts with MFA using software or hardware tokens is easily achievable and will go a long way towards minimising threats, especially as remote working becomes the norm for the foreseeable future.

4. Provide more self-service functions

Securing help desks will continue to prove difficult, so help staff help themselves with self-service functions that are secured by two-step authentication mechanisms.

Across the board, organisations will relax their security protocols to ensure their workers have frictionless access to their accounts and applications while working remotely, which will create new attack surfaces. Established, standards-based solutions that use protocols like SAML, OAuth 2.0 and OpenID Connect provide assurance and flexibility, especially when it comes to securely onboarding new applications.

Many cyber-attacks and breaches, including some high-profile ones here in Australia, go weeks or even months without being detected. There are never all-encompassing fixes or answers that solve everything, but standards-based approaches are a dependable way of navigating these confusing times.

Surveillance Capitalism

Shoshana Zuboff’s ‘The Age of Surveillance Capitalism’ is an excellent resource if you’re trying to understand exactly what is at stake with the overreaching and attractively packaged business models of companies like Facebook, Google, Apple and Amazon.

It is chilling and, frankly, the tone of the book can come across as apocalyptic, but it’s thoroughly researched and strongly argued.

”In this new regime, objectification is the moral milieu in which our lives unfold. Although Big Other can mimic intimacy through the tireless devotion of the One Voice — Amazon Alexa’s chirpy service, Google Assistant’s reminders and endless information — do not mistake these soothing sounds for anything other than the exploitation of your needs. I think of elephants, the most majestic of all mammals: Big Other poaches our behaviour for surplus and leaves behind all the meaning lodged in our bodies, our brains, and our beating hearts, not unlike the monstrous slaughter of elephants for ivory. Forget the cliche that if it’s free, “You are the product.” You are not the product; you are the abandoned carcass. The “product” derives from the surplus that is ripped from your lives.” (P 377)

”When I speak to my children or an audience of young people, I try to alert them to the historically contingent nature of “the thing that has us” by calling attention to ordinary values and expectations before surveillance capitalism began its campaign of psychic numbing. “It is not OK to have to hide in your own life; it is not normal,” I tell them. “It is not OK to spend your lunchtime conversations comparing software that will camouflage you and protect you from continuous unwanted invasion.”” (P 521)

I cannot recommend the book enough - there have been some great books on Big Tech these last few years but Zuboff’s tome is just alarmist enough for people to panic.

(On a related note, during my recent trip to India and Sri Lanka, I found it quite unnerving how quickly people have become nonchalant about having their faces captured on phones, or about providing copies of their national identity cards to pretty much anyone who can get them a good deal on mobile phone SIM cards.)

Apple’s SSO

Of all the things Apple announced yesterday, the most interesting was a small feature they casually dropped - an Apple-branded SSO, their own version of a ‘social login’. While it seems similar to the Facebook or Google login buttons you see on many sites, this third-party login doesn’t share your email address with the service providers you use it to register with, and goes so far as to generate and maintain separate, unique alias email addresses.

The service providers get access to the relevant information they need to deliver their service, but it’s up to you whether to share your real name or email address. At face value, this is almost too good to be true because it shifts the locus of control to the user. In true Apple fashion, they’ve made it mandatory for developers that offer third-party logins.

I’m cautiously optimistic and excited about Apple’s pivot to a privacy-conscious organisation but as with all these things, the detail is in the fine print.

Books, 2019

There have been some great books these last few years that I’ve finally managed to get around to. Here are some of my favourites so far.

Eating Animals by Jonathan Safran Foer - Ever since we got a dog, I’ve been somewhat queasy about eating meat, especially the four-legged variety. While the book may put off the more avid meat-eaters among us, there are some compellingly documented reasons to be wary of factory farming, the cruelty it inflicts on animals and its impact on the environment. There’s an interesting counterpoint to the ‘nature is cruel’ argument as well that I found particularly insightful. If anything, the book has been directly responsible for me giving up KFC and staying away from lamb, beef and pork over the last few months.

The People vs Tech by Jamie Bartlett - Technology giants are an easy target, and writing about evil algorithms and smartphone addiction is in vogue and lucrative; however, the point needs to be hammered home. I found Bartlett’s book a bit too on-the-nose but still riveting - he paints a picture of how we got here and how our reverence for technology has blinded us to its impact on democracy. It’s bleak and provides little hope, much like another book I read this year.

The Uninhabitable Earth by David Wallace-Wells - This is both one of the best and worst books I’ve read in the last few years. Wallace-Wells paints a bleak picture of the horrors of climate change and how we’re programmed to ignore slow decimation. He tackles the unfairness of climate change - the countries that shoulder the most ‘climate guilt’ will see the least impact - and the best-case scenario that he (and the IPCC) paints isn’t ideal: a 2.5-degree increase is still catastrophic, while the worst-case scenario is downright depressing. There are some optimistic takeaways but they’re mostly relegated to the last few pages. I’d say this is essential reading, especially for those who believe the invisible hand of the market or increased awareness will save us.

Exhalation by Ted Chiang - Chiang returns with a book of science fiction short stories (following Stories of Your Life and Others) that are entertaining and mind-expanding. There are some genuinely excellent stories here, especially ‘The Lifecycle of Software Objects’ and ‘Anxiety Is the Dizziness of Freedom’.

The Overstory by Richard Powers - This book is full of such rich detail, especially the first half where it paints elaborate portraits of its varied protagonists. As with most books I’ve read this year, the environment and our impact on it is a major theme.

On Disconnecting from Social Media

I’ve flirted with the idea of scrubbing my social media presence for a while now - aside from being a time-suck, we’re entering an era where these giant conglomerates aren’t just ethically ambiguous anymore; they’re starting to seem downright morally bankrupt.

Jaron Lanier has a new book outlining why everyone should delete their social media accounts (right now!), and I found it particularly good at articulating why I eventually bit the bullet and deleted all of my social media profiles.

  • Argument 1: You are losing your free will.

  • Argument 2: Quitting social media is the most finely targeted way to resist the insanity of our times.

  • Argument 3: Social media is making you into an asshole.

  • Argument 4: Social media is undermining truth.

  • Argument 5: Social media is making what you say meaningless.

  • Argument 6: Social media is ruining your capacity for empathy.

  • Argument 7: Social media is making you unhappy.

  • Argument 8: Social media doesn’t want you to have economic dignity.

  • Argument 9: Social media is making politics impossible.

  • Argument 10: Social media hates your soul.

Recommended reading: Do You Have a Moral Duty to Leave Facebook?

On Devices and Behavior

With the recent Apple Watch and iPhone announcements, a couple of things stand out about the pronounced way our behavior is being molded by these products.

Apple Watch Series 4 comes with the ability to take ECG measurements. While this sounds immensely useful, my wife pointed out that it could contribute to self-diagnosis and anxiety.

"Do you wind up catching a few undiagnosed cases? Sure. But for the vast majority of people it will have either no impact or possibly a negative impact by causing anxiety or unnecessary treatment," says cardiologist Theodore Abraham, director of the UCSF Echocardiography Laboratory. The more democratized you make something like ECG, he says, the more you increase the rate of false positives—especially among the hypochondriac set.

THE NEW ECG APPLE WATCH COULD DO MORE HARM THAN GOOD

Another annoying outcome is the death of the small phone; Apple seems to have all but given up on the SE series, which, in hindsight, was the perfect size for a phone. We’re left with a slew of devices that are clumsy and awkward to hold and use.

And not just hands. Bigger phones take up more pocket and purse real estate. They strain your thumbs and stress your jeans. They’re more frustrating to run with. They demand both hands to operate. They also arguably require more mental space; the larger the screen, the more you do with it, and the more easily it becomes the locus of your daily life.

THERE ARE NO MORE SMALL PHONES

It Takes Two (To Thwart Data Breaches)

Some interesting insight from Gemalto's 2017 Data Breaches and Customer Loyalty Report:

  • Of the 10,000 consumers interviewed, only 27% feel businesses take customer data security seriously
  • 70% would take their business elsewhere following a breach
  • 41% fail to take advantage of available security measures such as multi-factor authentication
  • 56% use the same password for multiple online accounts

While consumers are rightfully skeptical of the security hygiene of businesses they interact with, there is certainly a role for consumers to play here. 

 

Krebs on IoT Vulnerabilities

Brian Krebs has some interesting insight into this past weekend's DDoS attack on Dyn, an internet infrastructure company that provides services for some of the web's biggest destinations including Twitter, Amazon, Reddit and Netflix.

At first, it was unclear who or what was behind the attack on Dyn. But over the past few hours, at least one computer security firm has come out saying the attack involved Mirai, the same malware strain that was used in the record 620 Gbps attack on my site last month. At the end of September 2016, the hacker responsible for creating the Mirai malware released the source code for it, effectively letting anyone build their own attack army using Mirai.

Mirai scours the Web for IoT devices protected by little more than factory-default usernames and passwords, and then enlists the devices in attacks that hurl junk traffic at an online target until it can no longer accommodate legitimate visitors or users.

...

The wholesalers and retailers of these devices might then be encouraged to shift their focus toward buying and promoting connected devices which have this industry security association seal of approval. Consumers also would need to be educated to look for that seal of approval. Something like Underwriters Laboratories (UL), but for the Internet, perhaps.

Until then, these insecure IoT devices are going to stick around like a bad rash — unless and until there is a major, global effort to recall and remove vulnerable systems from the Internet. In my humble opinion, this global cleanup effort should be funded mainly by the companies that are dumping these cheap, poorly-secured hardware devices onto the market in an apparent bid to own the market. Well, they should be made to own the cleanup efforts as well.

The upside here is that IoT manufacturers and vendors will now have to wise up to the fact that they have more to gain from secure devices and a lot to lose from a repeat of this weekend’s events.
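A small, hedged illustration of the factory-default point: the check Mirai relies on is nothing more than trying a short list of well-known username/password pairs. The sketch below flags devices in an inventory whose configured credentials still match such a list (the inventory format and default list are invented); it’s an audit aid, not a scanner.

```python
# Illustrative audit: flag inventoried IoT devices still using well-known
# factory-default credentials. The inventory format is invented; the point
# is how small the "secret" space Mirai exploits really is.
from typing import Dict, List, Set, Tuple

KNOWN_DEFAULTS: Set[Tuple[str, str]] = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def devices_with_default_creds(inventory: List[Dict[str, str]]) -> List[str]:
    """Return the names of devices whose credentials match a known default pair."""
    return [
        d["name"]
        for d in inventory
        if (d["username"], d["password"]) in KNOWN_DEFAULTS
    ]

if __name__ == "__main__":
    inventory = [
        {"name": "lobby-camera", "username": "admin", "password": "admin"},
        {"name": "dvr-01", "username": "admin", "password": "S7rong&Unique"},
    ]
    print(devices_with_default_creds(inventory))  # -> ['lobby-camera']
```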

On Aesthetic Diversity (or lack thereof)

While sifting through Airbnb for our upcoming honeymoon, we noticed that apartments in Tokyo and Kyoto looked noticeably similar to the ones we've stayed at in Australia and even Hawaii. You're also likely to notice this with local cafes and restaurants - exposed walls, raw wood tables and brushed ceramic cups. The Verge has a surprisingly insightful piece on the phenomenon.

"As an affluent, self-selecting group of people move through spaces linked by technology, particular sensibilities spread, and these small pockets of geography grow to resemble one another, as Schwarzmann discovered: the coffee roaster Four Barrel in San Francisco looks like the Australian Toby’s Estate in Brooklyn looks like The Coffee Collective in Copenhagen looks like Bear Pond Espresso in Tokyo. You can get a dry cortado with perfect latte art at any of them, then Instagram it on a marble countertop and further spread the aesthetic to your followers."

(...)

"The connective emotional grid of social media platforms is what drives the impression of AirSpace. If taste is globalized, then the logical endpoint is a world in which aesthetic diversity decreases. It resembles a kind of gentrification: one that happens concurrently across global urban centers. Just as a gentrifying neighborhood starts to look less diverse as buildings are renovated and storefronts replaced, so economically similar urban areas around the world might increasingly resemble each other and become interchangeable."

On Deep Work

“An even more extreme example of a onetime grand gesture yielding results is a story involving Peter Shankman, an entrepreneur and social media pioneer. As a popular speaker, Shankman spends much of his time flying. He eventually realized that thirty thousand feet was an ideal environment for him to focus. As he explained in a blog post, “Locked in a seat with nothing in front of me, nothing to distract me, nothing to set off my ‘Ooh! Shiny!’ DNA, I have nothing to do but be at one with my thoughts.” It was sometime after this realization that Shankman signed a book contract that gave him only two weeks to finish the entire manuscript. Meeting this deadline would require incredible concentration. To achieve this state, Shankman did something unconventional. He booked a round-trip business-class ticket to Tokyo. He wrote during the whole flight to Japan, drank an espresso in the business class lounge once he arrived in Japan, then turned around and flew back, once again writing the whole way—arriving back in the States only thirty hours after he first left with a completed manuscript now in hand. “The trip cost $4,000 and was worth every penny,” he explained.” 

- Cal Newport, Deep Work: Rules for Focused Success in a Distracted World