Jekyll2023-05-25T13:44:44+00:00https://joepetrich.com/feed.xmljpetrich.github.iojpetrich's personal website.The race to the bottom on NFT royalties2022-11-06T00:00:00+00:002022-11-06T00:00:00+00:00https://joepetrich.com/2022/11/06/nft-royalties<p>If you pay any attention to the web3 landscape, you may have seen commotion recently about royalties. One of the most attractive things about NFTs for creators is the expectation that the creator of an NFT collection receives royalties on all sales of the NFT. Whereas in the traditional art world it’s impossible for an artist to monetize the increase in value of their older works, with NFTs, an artist’s earnings continue with each secondary sale, and are set at a percentage of the sale price.</p>
<p>But how do NFT royalties work? While there is a <a href="https://eips.ethereum.org/EIPS/eip-2981">standard</a> for defining royalties, its implementation is unenforceable from the point of view of the creator. This is because trading of NFTs is done through marketplace contracts like <a href="https://opensea.io">OpenSea’s</a> <a href="https://opensea.io/blog/announcements/introducing-seaport-protocol/">Seaport</a>. In practice, this means that royalties are set up with each marketplace separately, and creators trust the marketplaces to enforce royalties on each transaction. Collectors put up with this until recently, because OpenSea is the dominant NFT marketplace and respects royalties. However, newer marketplaces have realized that royalties are an area of competition, so there has been a race to the bottom, with <a href="https://magiceden.io/">Magic Eden</a>, <a href="https://x2y2.io/">X2Y2</a>, and <a href="https://blur.io/">Blur</a> making royalties optional for purchasers, and <a href="https://looksrare.org/">LooksRare</a> reducing royalties to a fixed 2% across the platform, with only a quarter of that going to creators.</p>
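For concreteness, here is a minimal JavaScript sketch of the royalty math that EIP-2981-style contracts typically express through royaltyInfo: a receiver and an amount derived from the sale price in basis points. The 5% rate and the receiver string are hypothetical placeholders, and this is exactly the enforcement gap: the standard only reports what is owed; nothing forces a marketplace to pay it.

```javascript
// Toy sketch of EIP-2981-style royalty math. The receiver and the
// 500-basis-point (5%) rate are hypothetical; real contracts set these
// per collection or per token.
const ROYALTY_RECEIVER = "0xCreatorAddress"; // placeholder, not a real address
const ROYALTY_BPS = 500n; // royalty rate in basis points

function royaltyInfo(salePrice) {
  // The standard's interface returns who gets paid and how much,
  // given a sale price (BigInt, in wei or any smallest unit).
  const royaltyAmount = (salePrice * ROYALTY_BPS) / 10000n;
  return { receiver: ROYALTY_RECEIVER, royaltyAmount };
}

// A marketplace that honors royalties would pay this out at settlement;
// one that doesn't can simply ignore the returned values.
console.log(royaltyInfo(1000000n).royaltyAmount); // 50000n
```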
<p>While this race to the bottom was inevitable, the question now is how to continue to make NFTs a valuable means of income for creators and artists. Several approaches are being debated across the web3 community, including:</p>
<ul>
<li>
<p>Creators should just charge more for the initial mint, or reserve a number of NFTs for themselves to sell later if their work appreciates in value. This works for new collections, but doesn’t help creators with existing contracts.</p>
</li>
<li>
<p>Royalties should be optional, but someone should track who chooses to honor them, and the community will reward those who pay royalties over time. This is optimistic and doesn’t guarantee any income for creators.</p>
</li>
<li>
<p>Creators should choose to block marketplaces that don’t honor royalties from interacting with their NFT contracts. This will work for future contracts, but seems to be opposed to decentralization, and benefits OpenSea directly unless other marketplaces go back to requiring royalties.</p>
</li>
</ul>
<p>What do you think the best way forward is? Reach out on <a href="https://www.farcaster.xyz/">Farcaster</a>, <a href="https://twitter.com/jpetrichsr">Twitter</a>, or via <a href="mailto:joe@petrich.xyz">email</a> and let me know! I plan on sharing my thoughts in the next few weeks.</p>Pseudorandom Generators2022-02-07T00:00:00+00:002022-02-07T00:00:00+00:00https://joepetrich.com/2022/02/07/pseudorandom-generators<p>This is my second post related to <a href="https://mit6875.github.io/">MIT 6.875 - Foundations of Cryptography</a>. My motivation for going through this course is in the <a href="https://joepetrich.com/2022/01/19/perfect-secrecy/">first post</a>.</p>
<h2 id="pseudorandom-generators">Pseudorandom Generators</h2>
<p>While perfect secrecy is only achievable through a one-time pad, which is infeasible for most
communication, if we put a constraint on the adversary trying to decipher an encrypted message, we
can achieve secrecy with less effort. The constraint modern cryptography puts on the adversary is to
limit the adversary’s computational power. We don’t need to limit our adversary too much to accomplish
practical secrecy, but we do need to take for granted something widely accepted yet never proven -
that <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem">P != NP</a>. Without getting into the weeds of
computational complexity theory, the idea is that there is a class of problems that are only solvable
in a categorically more computationally intensive manner than others, such that it would take massive
leaps in either the theory of computer science, or methods of computation, or both, to solve them.
What this idea allows us to do is to define precisely the idea of negligibility.</p>
<h3 id="negligibility">Negligibility</h3>
<p>The idea of negligibility in this context is similar to its plain-English meaning. Formally, a
probability is negligible if it shrinks faster than the inverse of any polynomial in the security
parameter (for example, the key length). In other words, even an adversary with enough computational
power to make polynomially many guesses at our encrypted message succeeds with vanishingly small
probability. That is, the probability of successfully cracking the encryption must be so low that it
is practically impossible. This broadens our definition of secrecy somewhat, giving us a practical
way to build secure cryptographic schemes under realistic computational constraints. Whereas in a
perfectly secret scheme the attacker has exactly a 1/2 probability of guessing each bit of the
original message, we can define a principle of computational indistinguishability that will allow
for practical secrecy.</p>
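As a quick numeric illustration (an informal sketch, not the formal definition): an inverse-exponential function like 1/2^n starts out larger than an inverse polynomial like 1/n^3, but soon drops below it and stays there forever, which is what makes it negligible.

```javascript
// Compare an inverse-exponential function (negligible) with an inverse
// polynomial as the security parameter n grows. These particular
// functions are illustrative, not part of any specific scheme.
const inverseExponential = (n) => 1 / 2 ** n;
const inversePolynomial = (n) => 1 / n ** 3;

// At small n, the exponential is still the larger of the two...
console.log(inverseExponential(5) > inversePolynomial(5)); // true

// ...but it falls below the polynomial and never recovers.
console.log(inverseExponential(20) < inversePolynomial(20)); // true

// At realistic key sizes the success probability is effectively zero.
console.log(inverseExponential(64)); // ~5.4e-20
```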
<h3 id="computational-indistinguishability-and-pseudorandom-generators">Computational Indistinguishability and Pseudorandom Generators</h3>
<p>An encryption scheme has computational indistinguishability if the probability of an attacker computing each bit of the original message from an encrypted message is at most negligibly more than 1/2. How can we take advantage of this idea to make cryptography practical? Remember that a one-time pad, the perfectly secret method of encryption, relies on a truly random sequence of bits at least as long as the message to encrypt. What if we designed a function to generate a string of N bits from a random seed of a few bits, such that the resulting string of bits is computationally indistinguishable from a truly random string? We could then use this generated string to encrypt a message shorter than N bits, and the attacker, limited not in theory but by practical computational limits, would be unable to decrypt the message.</p>
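Here is a toy JavaScript sketch of that construction: a short seed is stretched into a keystream, which is then XORed with the message. The linear congruential generator below is emphatically not a cryptographically secure PRG - it stands in for a real one purely to show the shape of the idea.

```javascript
// Stretch a short seed into a keystream of bytes. The LCG used here is
// NOT cryptographically secure; it only illustrates the construction.
function keystream(seed, length) {
  let state = seed >>> 0;
  const out = [];
  for (let i = 0; i < length; i++) {
    state = (Math.imul(1103515245, state) + 12345) >>> 0; // toy LCG step
    out.push(state & 0xff); // take the low byte as keystream output
  }
  return out;
}

// XOR is its own inverse, so one function both encrypts and decrypts.
function xorWithKeystream(bytes, seed) {
  const ks = keystream(seed, bytes.length);
  return bytes.map((b, i) => b ^ ks[i]);
}

const message = Array.from(Buffer.from("hello"));
const ciphertext = xorWithKeystream(message, 42); // seed is the shared key
const recovered = xorWithKeystream(ciphertext, 42);
console.log(Buffer.from(recovered).toString()); // "hello"
```

The point of the real construction is that if the keystream is computationally indistinguishable from random, the scheme inherits practical secrecy with only a short shared seed.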
<p>Modern cryptography uses this approach. Pseudorandom generators, which can’t be reverse-engineered in polynomial time, are used to generate keys. If P != NP, then pseudorandom numbers are negligibly less secure than truly random numbers, and the resulting encryption functions will have computational indistinguishability, and therefore practical secrecy. Unfortunately, since P != NP hasn’t been proven, it’s not possible to prove that any such cryptographic scheme is theoretically secure. Since no one has proved P = NP, though, cryptographers assume that modern cryptography is valid (or at least good enough for the internet!) and focus their efforts on defining functions that can act as pseudorandom generators. This topic of pseudorandom generators will lay the groundwork for the rest of the foundations of cryptography course.</p>Perfect Secrecy2022-01-19T00:00:00+00:002022-01-19T00:00:00+00:00https://joepetrich.com/2022/01/19/perfect-secrecy<p>I’ve been interested in cryptocurrencies for several years, and <a href="https://joepetrich.com/2021/07/24/hack-money-hackathon/">recently got into the technical
side of them</a>, learning to build on top
of Ethereum. As I learned more, my interest in the foundation of cryptocurrencies increased,
and so I read a couple of introductions to cryptography - <a href="https://en.wikipedia.org/wiki/Crypto_(book)">Crypto</a>
and <a href="https://en.wikipedia.org/wiki/The_Code_Book">The Code Book</a>. Learning about the cryptography
throughout the centuries, culminating in information theory and the battle around public key
cryptography and a secure internet in the late 20th century, made me want to learn more about the
fundamental math and computer science behind the cryptography that enables a secure internet,
decentralized blockchains, and self sovereignty. To that end, I decided to work through
<a href="https://mit6875.github.io/">MIT 6.875 - Foundations of Cryptography</a> which has lectures, papers,
and homework all available on GitHub. <a href="https://skitter-naranja-b42.notion.site/01aed573d0384dc090ce5d07004aaf82?v=9dc0c1ffe29c4d04af98c3e01e4f56b0">I created a schedule for myself to work through it in a
year</a>,
<a href="https://twitter.com/jpetrichsr/status/1480689628860624899?s=20">invited anyone to join me</a>, and
started <a href="https://discord.gg/h8C7UP8HzP">a small Discord</a> for discussing the lectures. The course
is technical, requiring a background in mathematical proofs and computer science, but the
concepts are broadly interesting and relevant to anyone interested in the foundations of secure
communication. I’ll be writing a few pieces here about the course material to help myself - what
better way to learn than to teach? - and hopefully interest more people in ‘the crypto behind
crypto.’</p>
<h2 id="perfect-secrecy">Perfect Secrecy</h2>
<p>At the foundation of cryptography is the idea of secrecy - how can Alice send a message to Bob
without Eve the eavesdropper understanding the message? When designing a cryptographic system, we
must assume that Eve can intercept the encrypted message - the ciphertext - and process it however
she pleases. In <a href="https://en.wikipedia.org/wiki/Communication_Theory_of_Secrecy_Systems">Communication Theory of Secrecy Systems</a>, Claude Shannon outlined an information-theory approach to cryptography using this idea. He outlined three algorithms in the system:</p>
<ol>
<li>Key Generation creates a key to be used in encryption and decryption.</li>
<li>Encryption takes a plaintext message and a key, and generates a ciphertext by combining them somehow.</li>
<li>Decryption takes a ciphertext and a key, and returns the plaintext.</li>
</ol>
<p>Alice and Bob meet at some point, and use the key generation algorithm to generate a shared key.
Eve has access to all three of these algorithms, as well as the ciphertext, but wasn’t there when
Alice and Bob generated the key. Because she knows the key generation algorithm, it’s important
that it be <em>random</em>. If it is deterministic, Eve will be able to generate the key that Alice and
Bob are using, and get the plaintext just like Bob can. The perfect cryptographic system would
guarantee what Shannon called “Perfect Secrecy,” where, no matter how much computing power Eve
has, or how sophisticated an algorithm, her chance at guessing the plaintext from the ciphertext
<em>is the same as if she didn’t even have the ciphertext.</em> Put differently, having the ciphertext
gives Eve no information about the plaintext.</p>
<h3 id="is-perfect-secrecy-achievable">Is Perfect Secrecy Achievable?</h3>
<p>Is perfect secrecy achievable? Yes, with a one-time pad. A one-time pad is a key, at least as long as the message, that is only used once. The one-time pad is used as follows:</p>
<ol>
<li>Generate the one-time pad by randomly generating as many bits as the message requires.</li>
<li>Take your message (in binary) and <a href="https://en.wikipedia.org/wiki/Exclusive_or">exclusive-or</a>
it with the one-time pad. This means you combine the bits pairwise: 1 XOR 1 = 0, 1 XOR 0 = 1,
0 XOR 1 = 1, 0 XOR 0 = 0.</li>
<li>On the decryption side, XOR the key with the ciphertext. As you can see, XOR reverses the
encryption perfectly.</li>
</ol>
<p>The randomness of the one-time pad is paramount, because any pattern used in its generation could
be replicated by Eve to generate the same key. If it is completely random, the ciphertext will be
completely random, and Eve will have no way to get information from it. However, if the one-time
pad is reused, the system is no longer secure! Two ciphertexts generated from the same key can be
XOR’d together, and the result is the XOR of the two plaintexts. This is because (Ciphertext A)
XOR (Ciphertext B) = (Plaintext A XOR Key) XOR (Plaintext B XOR Key), and the Key XOR Key cancels
each other out. Since the plaintexts are not generated randomly, the XOR of the two provides Eve
some information, violating perfect secrecy.</p>
<h3 id="what-is-the-cost-of-perfect-secrecy">What is the Cost of Perfect Secrecy?</h3>
<p>Perfect secrecy requires the use of a one-time pad, which, as you can imagine, is infeasible.
Alice and Bob will need to generate as many one-time pads as they will ever need to exchange
messages, and each key will need to be as large as or larger than the message it is used to
encrypt. Does every web browser come with a one-time pad large enough to secure all your
communication with all websites for as long as you might use the internet? Of course not! While
perfect secrecy is only achievable with great effort, we can define bounds on Eve’s
computational resources. With these bounds defined, we only need to prove that our cryptographic
system is unsolvable within those bounds. It is this concept upon which modern cryptography is
built.</p>Building Openhouse at the ETHGlobal HackMoney Hackathon2021-07-24T00:00:00+00:002021-07-24T00:00:00+00:00https://joepetrich.com/2021/07/24/hack-money-hackathon<p>I’ve been interested in cryptocurrency, and especially Ethereum, for a few
years, but never made the time to learn the technical side of it. I understood
the basics of smart contracts, and the differences between Bitcoin and Ethereum,
but, having followed the Ethereum ecosystem more closely for a few months, I
decided I wanted to dive deeper into Ethereum, smart contracts, and web3 tools.
I got permission from my company to participate, and signed up for the 2021
<a href="https://hackathon.money/">ETHGlobal HackMoney hackathon</a>!</p>
<h2 id="getting-started">Getting started</h2>
<p>ETHGlobal has an interesting sign-up system for their hackathons. In addition to
an application (which seemed designed mainly to weed out bots and low-effort
submissions), upon approval, participants must “stake” a small amount of ETH.
This is done by logging into their site with an Ethereum wallet, and clicking on
a button which prompts you to authorize the transfer of the ETH to the
hackathon’s address. Participants receive their stake back at the end of the
hackathon, as long as they complete required checkins and submit anything at the
end of it. This really appealed to me, despite losing the <a href="https://ethereum.org/en/developers/docs/gas/">gas
cost</a>, because it provided an
incentive to complete the hackathon and not bail out without finishing
something.</p>
<h2 id="finding-a-team">Finding a team</h2>
<p>After signing up, I was able to join the ETHGlobal Discord server, and verify
that I was a participant in the hackathon, again, using my Ethereum wallet.
There were channels for making introductions and finding teams. I joined without
knowing anyone else participating, and had planned on hacking solo unless I
could find a project that was really compelling and willing to take on someone
like me, with no Ethereum programming experience. I was surprised to find many
such teams! Some teams I talked to included the teams behind
<a href="https://www.pods.finance/">Pods</a> and <a href="https://pwn.finance/">PWN</a>.</p>
<p>In my introduction, I mentioned my interest in building identity and
authorization solutions based on Ethereum wallets. While I found these, and
other DeFi projects, interesting, I am personally more interested in the future
of web3, where users control their digital identities completely by identifying
themselves with their private keys, and therefore have the ability to reset
their identities at any point by generating new keys. Projects like
<a href="https://ens.domains/">ENS</a> that make it easy for users to associate
human-readable information with their private key really appeal to me, and I
think the future of the web will rely on systems like that. I hope that I will
one day be able to log in to pay my taxes or manage my healthcare with my keys
secured by my hardware wallet, that I will be able to engage with friends and
strangers on social media by logging in with a browser extension wallet, and
that I might have yet another wallet with a small amount of ETH, Bitcoin, or
stablecoins that replaces my leather wallet full of cards and cash today.</p>
<p>My introduction resonated with another participant, Chris, who had experience
with web3 livestreaming technology, and was interested in building web3
solutions to help facilitate online communities in a decentralized manner.
Through talking with him, we refined our ideas, and agreed to work together
after recruiting Drew, and, eventually, Prabhu, two more software engineers. By
the start of the hackathon we were ready to start building what we called
Openhouse, a video meeting platform with web3 login and access controlled
through
<a href="https://ethereum.org/en/developers/docs/standards/tokens/erc-721/">ERC721</a>
tokens.</p>
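The token-gating idea behind Openhouse can be sketched as follows; this is my own illustrative toy, not the project’s actual code, and the in-memory Map stands in for the on-chain ERC721 ownership lookup a real deployment would make.

```javascript
// Toy sketch of ERC721 token-gated access: admission to a room is
// decided by token ownership. A Map replaces the on-chain state here.
const tokenOwners = new Map([
  [1, "0xalice"], // tokenId -> hypothetical owner address
  [2, "0xbob"],
]);

// Mirrors ERC721's balanceOf: how many tokens does this address hold?
function balanceOf(address) {
  let count = 0;
  for (const owner of tokenOwners.values()) {
    if (owner === address) count++;
  }
  return count;
}

// Hold at least one token from the collection to enter the room.
function canJoinRoom(address) {
  return balanceOf(address) > 0;
}

console.log(canJoinRoom("0xalice")); // true
console.log(canJoinRoom("0xeve"));   // false
```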
<h2 id="hacking-away">Hacking away</h2>
<p>This was my first experience participating in a virtual hackathon, and I wasn’t
sure what to expect. I had participated in in-person hackathons before: 48 hour
endeavors with little time to sleep, let alone think! I found the pace of a 3
week hackathon much more enjoyable, and I appreciated the ability to work with
teammates across many timezones asynchronously. The pace gave me the opportunity
to study Solidity, SvelteKit (the JavaScript framework we used), Jitsi (the
open-source video conferencing software we used), and various other SDKs,
understand them, and make conscious architecture decisions that led to a much
more readable and improvable final product than we could have accomplished in a
48 hour sprint. I’m not sure yet whether we’ll keep working on Openhouse after
the hackathon, but if we do, or anyone else wants to, our GitHub is in a good
state to pick up and keep on hacking.</p>
<h2 id="final-thoughts">Final thoughts</h2>
<p>Our submission for Openhouse is on the <a href="https://showcase.ethglobal.co/hackmoney2021/openhouse">ETHGlobal showcase
site</a>. At least for now,
you can also play with it at openhouse.joepetrich.com, where it’s live on
Polygon. The code is on <a href="https://github.com/openhouse-project">GitHub</a>, and the
smart contract address on Polygon is
<a href="https://polygonscan.com/address/0x151a051fe0a9414950ef0b34030294ceab6f043a#code">0x151A051FE0a9414950Ef0B34030294cEaB6F043a</a>.</p>
<p>I learned a ton about web3 and Ethereum smart contracts in just three weeks by
participating in this hackathon, and I’d encourage any web2 developers to give a
future ETHGlobal hackathon a try. By the end of the hackathon, I had a better
understanding of multisig wallets, Solidity, ERC721 tokens, web3.js, and ENS.
The haze of terminology and only superficially understood technology that
clouded my understanding of web3 before has now been lifted, and I feel
confident to continue developing on top of Ethereum in the future.</p>Code Reviews For All2021-03-07T00:00:00+00:002021-03-07T00:00:00+00:00https://joepetrich.com/2021/03/07/code-reviews<p>A friend of mine remarked recently that they don’t really teach you how to do real software development
in school. Sure, you learn a language, hopefully a bunch of data structures and algorithms, and some
approximation of how computers work. All of this helps when it comes down to writing performant software,
but it’s unlikely that you come out of school knowing how to write readable code, code someone else will
enjoy modifying and maintaining, code that will stand up to users misusing it in ways you didn’t expect.
These skills come through collaboration, and, in my experience, came most quickly through the mentorship
of other software developers, and their reviews of my code.</p>
<p>The code review process can seem onerous, and I understand why some developers dread it. It can feel like
an exam, and, in a field where imposter syndrome runs rampant, it might seem unhealthy to subject
oneself to the criticism, however constructive, of one’s peers. I like to think of code reviews as a sort
of “opposition research,” and I think this helps me to view the process constructively. My colleagues
aren’t trying to pick my code to pieces, or prove me to be incompetent. They’re trying to strengthen my code
by finding its weaknesses, bringing the perspective of an outsider to reveal deficiencies I couldn’t see
myself. What’s more, not only does my code get better, I become a better software developer as I start to
anticipate the questions that reviewers will ask, and I get better at reviewing my own code through having
to perform code reviews myself.</p>
<p>In my first full-time software development job after college, I never had a code review. The pace of the
company was so fast, deadlines so urgent, and business growing so quickly that code reviews, unit testing,
and any sort of automation to ensure code quality were very low on the priority list. I learned a ton
in that job and it helped me grow my career, but the code I wrote wasn’t great. I became biased towards
pushing code quickly, and relying on manual testing to verify the correct behavior. Having inherited a giant
codebase that only grew during my tenure, I felt that writing a single unit test wasn’t worthwhile when we had 0%
code coverage. I bought in fully to the idea that to move fast and deliver products quickly, testing just
had to be left by the wayside until some nonexistent future day would come when our team would have time
to stop feature development and start writing tests.</p>
<p>My perspective changed quickly when I took a new job for a smaller company, with a more complex codebase,
that had upwards of 90% test coverage. At this company, the engineering director fostered a healthy
engineering culture. He ensured the team performed regular code reviews of each other’s code, enforced
a policy of keeping the test coverage up, and went to bat for the development team when there was an
apparent conflict between delivery timelines and code quality. I was initially intimidated to have my
code reviewed regularly, but it didn’t take long for me to realize how quickly my code was improving
through the code review process. Even though we didn’t have code reviews for every change, I began reading
more about general and language-specific best practices, reviewing my own code with a more critical eye,
and discussing designs more with my team instead of jumping into implementation. All of this led me to
become a much better developer, and to deliver much higher quality code.</p>
<p>Now that I’m at Google, I’m surrounded by people who share my view about code reviews. Every change is
reviewed by at least one other person, and my code quality has consistently improved as I’ve gone through
the review process here. I feel very fortunate that I had the mentorship of that engineering director before
coming to Google, and a little bit disappointed I didn’t have that experience right out of college. I’d
like for more software engineers, and especially aspiring and junior engineers, to have the benefit of
code reviews. If you write code and don’t have someone to review it, please reach out
and I’d be happy to review your code for you. I’m not the best software engineer out there, or the most
experienced, but I have reviewed many thousands of lines of code at this point, and I’d be happy to take
a look at yours as well. If you’d like to take me up on this offer, just
<a href="https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/requesting-a-pull-request-review">add me to your pull request</a>
on a public GitHub repo.</p>A simple email signup and verification service using Google Cloud Functions2021-02-21T00:00:00+00:002021-02-21T00:00:00+00:00https://joepetrich.com/2021/02/21/a-simple-subscription-service<p>As I’ve been working on this website and considering what to share on it, I remembered
the advice of a friend of mine, Zak, in an article he wrote called
<a href="https://zakslayback.com/stay-in-touch/">How to Stay In Touch With Your Network</a>. In it,
he recommends having a personal website and an email list to use when sharing a personal
newsletter. I’ve been reluctant to start sending out a newsletter as I can’t share most
of my work at Google, but I have enjoyed writing here on my own website, and I decided to
build an email list to share the thoughts and projects I put here. As a software engineer,
I thought it would be fun to build my own email signup and verification service instead
of using one of the many free or paid options. This post explains how I did it. If you want
to jump straight to the code, you can
<a href="https://github.com/jpetrich/email-signup-and-verification">check it out on my GitHub here</a>.</p>
<h2 id="choosing-an-architecture">Choosing an architecture</h2>
<p>I wanted to build an email signup service that I wouldn’t have to think about after deploying
it. If, for some reason, my email list grows to the point where I need to worry about scaling
I’ll likely migrate to another solution, so I decided to focus on something that would be
quick to spin up and scale out to dozens or hundreds of emails easily. Given these requirements,
I ruled out hosting my own server, which would require monitoring and consistent maintenance. I’ve
used Google App Engine, and that was my first thought, but then I remembered the simplicity of
AWS Lambda, which lets you run simple functions in a variety of languages without worrying about
even the serving framework. I decided to see if Google Cloud has a similar offering since I don’t
have a personal AWS account, and I found <a href="https://cloud.google.com/functions">Cloud Functions</a>.
Similar to App Engine, Cloud Functions allows using a variety of languages. I chose Node.js because
it’s quick to prototype and I’ve written App Engine apps in Node before.</p>
<p>After picking Cloud Functions for hosting, I needed to figure out how to collect emails, send
a verification email, and accept confirmation of receipt of that email. Collecting is easy - host
a form here on joepetrich.com that POSTs to my Cloud Function. I decided to use Cloud Firestore
for my database as it’s easy to get going and has a high free quota. It would have been just as
reasonable to create a structured database, but I didn’t want to worry about a schema while building
out the prototype. For sending a verification email, I found the
<a href="https://www.npmjs.com/package/nodemailer">nodemailer</a> package which clearly documented how to send
emails through Gmail <a href="https://nodemailer.com/smtp/oauth2/">using OAuth2</a>. For receiving confirmation
of receipt of the verification email, I decided on a second Cloud Function that would take a URL
parameter identifying the user to mark the address as verified. A final feature I haven’t implemented
yet is unsubscribe. I plan on building a third Cloud Function that will be accessible through a unique
link in each email that a user can click on to remove themselves from my email list.</p>
<h2 id="email-verification">Email verification</h2>
<p>Though this is a toy project, I am using it on my website, and wanted to take reasonable steps to prevent
abuse. For this reason, I wanted to make users verify their emails before they’re added to my list, and
I wanted this verification to need to be done through an email they receive, so it will be impossible
for someone to subscribe email addresses they do not own. To do this, I decided to generate a random and
unique string for each email address that subscribes. This string is appended to the verification URL
as a query parameter, and the URL is sent in the verification email. The <code class="language-plaintext highlighter-rouge">emailVerification</code> Cloud
Function checks the verification code against the value in the database and only activates the subscription
if the code in the URL matches. I’m only using random numbers from the built-in <code class="language-plaintext highlighter-rouge">Math.random</code> function, which does not
generate cryptographically sound random numbers, but the code is only used for email subscription, not
authentication, so I was happy to make the tradeoff of security for simplicity.</p>
<h2 id="user-experience">User experience</h2>
<p>Since I’m hosting the signup and verification endpoints using Cloud Functions and my website is hosted
on GitHub pages, hosting the endpoints on the joepetrich.com domain was going to be more difficult
than I thought it was worth. Ideally, the endpoints would be at joepetrich.com/emailSignup and
joepetrich.com/emailVerification, which would allow me to call them from frontend JavaScript on my
website, and either redirect to a confirmation page or simply change content on the home page in response
to the signup request. Since they’re hosted on a different domain, the POST request is made by
specifying the Cloud Function URL as the <code class="language-plaintext highlighter-rouge">action</code> of the HTML form element itself, and I redirect the user to a “thank you” page by
returning a redirect status code in the response from the Cloud Function. This achieves the user
experience I want, but limits my flexibility in the future. I’ll probably switch to hosting
these endpoints on joepetrich.com pretty soon.</p>
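The redirect approach can be sketched like this (the handler name and destination URL are hypothetical; Google Cloud Functions passes Express-style <code class="language-plaintext highlighter-rouge">req</code>/<code class="language-plaintext highlighter-rouge">res</code> objects to HTTP functions):

```javascript
// Hypothetical signup handler: after storing the pending subscription,
// respond with a 302 so the browser that submitted the HTML form lands
// on a "thank you" page. The destination URL is a placeholder.
function emailSignup(req, res) {
  // ... validate req.body.email and store the pending subscription ...
  res.redirect(302, 'https://joepetrich.com/thanks');
}
```

The form itself just points at the function, e.g. <code class="language-plaintext highlighter-rouge">&lt;form action="https://.../emailSignup" method="POST"&gt;</code>, so no frontend JavaScript is needed.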
<h2 id="final-thoughts">Final thoughts</h2>
<p>What did you think of this article? What could I have improved in building out this email subscription
service? Let me know by sending me an email at joe@petrich.xyz. If you’d like to receive future posts
via email, you can sign up using the service described here by entering your email address in the form
underneath my picture.</p>How I learn new things2021-02-09T00:00:00+00:002021-02-09T00:00:00+00:00https://joepetrich.com/2021/02/09/how-i-learn-something-new<p>In most situations, it is more important to be able to learn something than to already
know it. When starting most jobs, it is rare that you will be expected to
be adept at everything required for that job on day one. Your success will be measured by
your ability to learn how to perform well in new situations, and to become an expert in
subjects you previously left undiscovered. Even if your job doesn’t require such
adaptability - perhaps you are a surgeon, a pilot, or a skilled tradesman - learning new
things opens up other avenues of opportunity in your personal life through hobbies, shared
interests with friends and family, and financial wellbeing. In this article, I’d like to
share how I think about learning new things, with a few examples that I hope will help you
too.</p>
<h2 id="what-do-i-want-to-learn">What do I want to learn?</h2>
<p>When I identify something I want to learn, it usually takes the form of a problem statement
rather than a direct question. Sometimes the problem statement is open-ended, such as, “How
can I get the skills necessary to be a better leader?” and sometimes it is more focused,
as in, “What’s this ‘decentralized finance’ term I’ve been hearing, and should I invest
time and effort into it?” In the first case, I haven’t narrowed the problem space down
enough to start doing meaningful learning, and in the second, I have a fairly concrete
question that I can answer before directing my search for knowledge further. When I have a
broad problem statement, I like to get a handle on it by asking some more targeted
questions. This helps me identify the right way to work towards an answer, and often leads
me to reject the initial problem statement once I have a better understanding of the space.
For example, if I’ve identified I want to become a better leader, I’d ask myself, “Who are
some people I think of as great leaders, and how did they become great?” “Why do I want to
be a better leader?” “What makes me think I’m not already as good a leader as I need to
be?” The answers to these questions will help me find the right people to talk to, identify
other ways to accomplish my underlying motivation for the initial goal, and narrow down my
objective so I can ask better questions faster.</p>
<h2 id="find-enthusiasts">Find enthusiasts</h2>
<p>When I’ve identified what I want to know, I try to identify people who already know about
it. Sometimes this means having a conversation with someone I know who is already
knowledgeable to get recommendations for books, videos or websites. When I wanted to learn
classical philosophy, I asked a friend’s wife, who majored in philosophy, for book
recommendations, and walked away having resolved to read Plato’s <u>Republic</u>. Usually
though, I don’t have such a great personal connection, and in such
circumstances, I’ve found that online forums are really useful to get a pulse on a topic
and how experts, or at least enthusiasts, approach it. For many topics I’ve wanted to learn
about, including aquariums, 3d printing, mechanical keyboards, whisky, and coffee, Reddit
has been a great introduction. I usually browse recent posts to see if I can get a good
general sense of the quality of discussion, and also look at the top posts of the past
month and year to identify what has excited the community the most recently. As an example,
going to reddit.com/r/coffee today shows that people are interested in grind size
distribution, the merits of various coffee machines and grinders, optimizing coffee taste
from bean storage through brewing, and debugging their espresso machines. Zooming out to
the top posts of the past year reveals interesting questions about working in coffee,
scientific inquiry into how the chemistry of water affects taste, and how the coronavirus
impacts one’s enjoyment of coffee. If you’ve heard of “specialty coffee” and wanted to get
an idea of what that term entails, browsing these few Reddit posts would give you a great
introduction. For other topics, another forum may be more popular. Ham radio, for instance,
has active subreddits, but there is more activity on other forums like qrz.com.</p>
<h2 id="take-a-deep-dive">Take a deep dive</h2>
<p>Sometimes, taking a survey and reading the thoughts
of enthusiasts is enough to give me the context I’m looking for. At this point I might
be satisfied - perhaps I realized I’m not as interested as I thought I would be, or I have
enough information to do the part of my job that required some insight. Often, though,
this broad context whets my appetite for more. Depending on the subject, going deep looks
different - the best and most fundamental material might take the form of textbooks,
literature, or videos. Again, I lean on the community to point me to these resources. Many
online communities maintain wikis filled with links for beginners. I find it worthwhile to
work through these resources, even if they appear tangential to what I’m trying to learn,
both because they will point to others, and because it’s useful to reference
them when engaging with the community in the future. Posts like “I read A, B, and C from
the ‘getting started’ guide and had these questions…” will get a much more useful and
positive response than “Tell me about X.”</p>
<h2 id="learn-by-teaching">Learn by teaching</h2>
<p>After a few days of immersing myself in a topic - reading forum posts, articles, and
books, watching videos, and listening to podcasts, I’ll either feel confident in my
newfound knowledge, amazed at how much there still is to learn, or befuddled by the
variety of conflicting things I’ve read. To clarify my own thoughts and distill them into
coherency, I find it very helpful to teach someone else. As Flannery O’Connor said, “I
write because I don’t know what I think until I read what I say.” Whether it’s through
writing or talking about it, outputting what I’ve learned helps me identify what I’d bet
on and what’s still rolling around in my head trying to find its resting place. Another benefit of sharing is getting someone else’s thoughts and questions. When I’ve discussed
topics I’m starting to learn about with other novices, I’ve received great fundamental
questions that help me refocus my attention on basic understanding, and when I’ve
had conversations with subject-matter experts after learning the fundamentals, I’ve been
able to understand their insights, accelerating my learning process.</p>
<h2 id="final-thoughts">Final thoughts</h2>
<p>What did you think of this article? Do you approach learning in a similar way? Let me know
by sending me an email at joe@petrich.xyz.</p>Building A Prusa i3 MK3S+ 3D Printer2021-02-04T00:00:00+00:002021-02-04T00:00:00+00:00https://joepetrich.com/2021/02/04/building-a-prusa-mk3s<p>Before the pandemic, I enjoyed using the 3d printers at my office. I wasn’t very adept with
them, mainly printing models I found online with recommended settings, but I still found it
interesting and was starting to learn 3d modeling and more advanced printing techniques.
With no end in sight to work from home, I decided to purchase a 3d printer to use at home.
I decided on the <a href="https://shop.prusa3d.com/en/51-original-prusa-i3-mk3s">Original Prusa i3 MK3S+</a>
which is available pre-built or as a kit (at a substantial discount). I decided since the
point of getting a 3d printer was to have fun with it as a hobby, I would build it myself,
and I couldn’t be happier with that decision. After the kit arrived, I built it over a week
of evenings. I took videos of every step for my own reference - I wanted to be able to
refer to them to find my mistake if the printer didn’t pass its self-test, but decided to
post them to YouTube in case they would be helpful to any other new Prusa owners.</p>
<p>In addition to the step-by-step videos you can find <a href="https://www.youtube.com/playlist?list=PLJBQMpoFCtCFGM209no83YHeww7uSbnJG">here</a>, I uploaded a time-lapse of
each major stage of assembly:</p>
<h3 id="y-axis">Y-Axis</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/lYoCCvviWxo" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="x-axis">X-Axis</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/9CfTvJmVVlQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="z-axis">Z-Axis</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/z0aiN9JEPLc" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="e-axis">E-Axis</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/VTYIt1g0Mnw" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="lcd">LCD</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/zjDBGCTx2IE" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="heatbed-and-psu">Heatbed and PSU</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Qe1Mu2zc9BY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="electronics">Electronics</h3>
<iframe width="560" height="315" src="https://www.youtube.com/embed/PozWZpF4XKU" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h3 id="final-thoughts">Final thoughts</h3>
<p>Do you have a 3d printer? Any tips or tricks to share? Please shoot me an email at
joe@petrich.xyz. I’d love to hear from you!</p>Unboxing a new Model F Keyboard2021-01-19T00:00:00+00:002021-01-19T00:00:00+00:00https://joepetrich.com/2021/01/19/model-f-unboxing<p>Update 01-30-2021: Originally, I incorrectly asserted that the keyboards would ship with the keys
pre-installed when the dye-sublimated ones are ready. This was incorrect, and the article has been
updated to reflect that.</p>
<p>In the summer of 2017, I heard of an intriguing project to reproduce a classic and well-loved keyboard,
the IBM Model F. The Model F is known for its unique keypress action - the buckling spring. Unlike most
mechanical keyboard switches, the switch in a buckling spring keyboard gives the spring in each key
room to buckle when it is depressed, leading to a very tactile experience. I took a look at the
<a href="https://modelfkeyboards.com">Model F Labs</a> website and decided to support the project by pre-ordering
an F77.</p>
<p>The astute reader will realize that this post, written upon receiving my F77, was published in 2021,
nearly four years after placing that preorder, and might expect my opinion of the project to be soured
by the long wait. On the contrary, following the project over the past few years has been
extraordinarily interesting, and has greatly added to the enjoyment I now experience from my new
keyboard. Clearly, others have felt the same way, with over 2,000 people supporting the project and
over $1.2M in sales. Rather than rehashing the backstory behind the project, the design and production
challenges, and advantages of the Model F design, I’ll link to the
<a href="https://modelfkeyboards.com">website</a>, <a href="https://www.modelfkeyboards.com/blog/">blog</a>, and
<a href="https://www.popularmechanics.com/technology/gadgets/a27123/model-f-project-buckling-spring-keyboard/">Popular Mechanics article</a>
about the project, while sharing some pictures and a video of the keyboard in action below.</p>
<p>Note: I was given the option to have my keyboard delivered early without dye-sublimated keys
as the key printing was delayed for a number of reasons. The pictures below show the F77 keyboard
with blank keys which I purchased to use until the dye-sublimated ones are ready later this year.</p>
<p>The keyboard was shipped in a plastic bag that protected the box and packing slip well.</p>
<p><img src="/assets/modelf/box.jpg" alt="top-down view of box" />
<img src="/assets/modelf/box_side.jpg" alt="side view of box" /></p>
<p>The box had a label on it indicating the serial number along with the neat drawing of a buckling spring.
<img src="/assets/modelf/box_label.jpg" alt="outside label" /></p>
<p>Opening the box revealed perfectly-fitting styrofoam enclosing the keyboard.
<img src="/assets/modelf/open_box.jpg" alt="open box" /></p>
<p>The attention to detail shows even in the packaging, as this embossed F77 on the styrofoam demonstrates.
<img src="/assets/modelf/styrofoam_close.jpg" alt="styrofoam F77" /></p>
<p>Outside the styrofoam was an invoice, printed appropriately on pinfeed computer paper.
<img src="/assets/modelf/invoice.jpg" alt="invoice" /></p>
<p>Opening the styrofoam, the keyboard, inserts, and cork feet were packed neatly inside. My keyboard came
without keys installed since the dye-sublimated keys weren’t ready and I opted to receive the keyboard
early, ordering a set of blank keys to use in the meantime. My current keyboard is a DAS Model S with
blank keys, so I didn’t mind not having the symbols printed.
<img src="/assets/modelf/open_top.jpg" alt="open box from above" />
<img src="/assets/modelf/open_angle.jpg" alt="open box from an angle" /></p>
<p>The keyboard itself has a quality engraved plate on the bottom of it, with its serial number and date
of manufacture.
<img src="/assets/modelf/number_plate.jpg" alt="plate" /></p>
<p>Installing the keys is not a particularly quick process, but there are lots of helpful tips on the
project website. The basic process includes holding the keyboard vertically, with the spacebar up,
pressing each key into its slot, ensuring it depresses with a compelling motion and noise, and then
moving onto the next key until all the keys are seated. It’s not a difficult process, and since the
keyboard is designed to last practically forever, it’s a process that would behoove any Model F
keyboard owner to get comfortable with.</p>
<p><img src="/assets/modelf/keyboard_top.jpg" alt="assembled keyboard" />
<img src="/assets/modelf/keyboard_angle.jpg" alt="assembled keyboard at an angle" /></p>
<p>What does the assembled keyboard sound like?</p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/u-SDvrAQU84" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>How to debug2021-01-11T00:00:00+00:002021-01-11T00:00:00+00:00https://joepetrich.com/2021/01/11/how-i-debug<p>Debugging isn’t just a skill for software developers and technical support - it’s a mindset for problem solving
that is applicable in many spheres. In this article, I draw on my experience debugging production issues to give
you a framework for debugging. While I find it easiest to write about in the context of production software, the
methodology is also applicable to general problem solving, and I hope readers outside of software will appreciate
this way of thinking.</p>
<h2 id="triage">Triage</h2>
<p>Whether you’re on call for a production service when a page comes in or you receive a call from a friend asking you
to help them “fix their internet,” the first step in
debugging is to assess the issue and estimate its impact. This means answering two main questions:</p>
<ol>
<li>Who is affected?</li>
<li>What is the impact on affected users?</li>
</ol>
<p>For a production service, users could be internal developers, partners, or end users of your app or website. It’s
important to identify if <em>all</em> users are affected, or a particular subset is. For example:</p>
<ul>
<li>Developers hitting the pre-production API from the Eastern US</li>
<li>New users signing up</li>
<li>All users in Australia</li>
</ul>
<p>Your ability to refine the description of this subset will be limited by the monitoring you have set up for
your service, whether that’s through your cloud provider or manually created systems. The priority of solving an
issue in production should be influenced by the number of users it affects, and who those users are.</p>
<p>The impact to affected users goes hand in hand with the number and kind of user affected when determining the
issue’s severity and priority. When looking to assess impact, you can look at it from a service perspective or a
user perspective. From a service perspective, you want to ask about the type of traffic that is affected, and to
what degree. Some things to look at include error rates and latency across API methods, servers, clusters, or
data centers. Just like how you identified a subset of users that are impacted by the issue, here you describe
the subset of your service that is affected. For example:</p>
<ul>
<li>Inbound GET calls to the /profilePicture API are experiencing 100x latency compared to yesterday</li>
<li>5% of DELETE calls to /posts/id are returning a 500 error</li>
<li>All calls to all APIs routed to your server in New Jersey are timing out</li>
</ul>
<p>From a user perspective, assessing impact means identifying which user journeys aren’t working as intended, and to
what degree. If your issue is user-reported, this means looking for details like “<em>every time</em> I try X, Y happens” or “Z didn’t work <em>but I was still able to sign up</em>.” Even if your assessment of impact started from the service side, you should figure out how users are affected to appropriately assign severity and prioritize the issue. Some examples of user impact (combined with the subsets identified above) include:</p>
<ul>
<li>Internal developers are getting a 500 error when calling the pre-production /healthCheck endpoint</li>
<li>New users are unable to continue past picking their favorite widget in the signup flow</li>
<li>All users in Australia are unable to see other users’ profile pictures</li>
</ul>
<p>When you combine the answers to the above questions - getting the “who” and the “what” - you can assign a severity
to the issue and prioritize it. More or higher-value users, at a high-rate or in an experience-breaking way, on
important user journeys means a higher severity. Fewer, lower-value users, at low rates, in inconvenient ways, on
less key user journeys means a lower severity.</p>
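To make that combination concrete, here is a purely illustrative sketch of folding the “who” and “what” answers into a severity level. The function and its thresholds are invented for this example, not taken from any real incident tooling:

```javascript
// Illustrative only: combine the "who" (share of users affected) and
// the "what" (journey importance, degree of breakage) into a severity.
// The weights and thresholds here are arbitrary.
function severity({ userShare, journeyCritical, experienceBreaking }) {
  let score = 0;
  if (userShare > 0.5) score += 2;       // most or all users affected
  else if (userShare > 0.05) score += 1; // a meaningful subset
  if (journeyCritical) score += 1;       // e.g. signup, checkout
  if (experienceBreaking) score += 1;    // blocked vs. merely inconvenienced
  return score >= 3 ? 'high' : score === 2 ? 'medium' : 'low';
}
```

Real triage is a judgment call, of course - the point is only that severity is a function of both dimensions, not either one alone.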
<p>In a general sense, when triaging you’re attempting to boil the issue down to its simplest description while also
measuring the impact. If you’re debugging your friend’s internet, triaging the issue would mean finding out what
they mean when they say the internet “is not working.” For example, what are they trying to do on the internet? Are
all apps and websites not working? What about other devices they have on the same internet connection? What about other people in their house? If you don’t answer these questions first, trying to solve their problem is going to
be incredibly inefficient because you don’t actually know what problem you’re trying to solve. Even telling them to “turn it off and on again” is bad advice if what they meant by “is not working” is that a tree came down on their fiber line outside.</p>
<h2 id="mitigate">Mitigate</h2>
<p>Once you’ve assessed the issue’s severity, move on to mitigation. The point of mitigation is not to solve the issue
forever, but to reduce or eliminate the impact of the issue. Some mitigation strategies include routing traffic
away from a single server that has high latency, rolling back a change that is correlated with an increase in
error rates, stopping a batch process that isn’t critical to the end user experience when your servers are at
capacity, and disabling a broken feature so that users don’t encounter the issue.</p>
<p>Beyond lessening the impact of an issue much more quickly than implementing the perfect long-term fix could,
mitigation allows you to think through what the “real” solution should be without the pressure of a user-facing
outage. In the morning after a now-mitigated outage, you might realize that the code change you rolled
back as a mitigation, rather than fixing properly, was submitted by mistake, or that the batch process
you paused, instead of buying more capacity for, had been hacked and was mining altcoins instead of
training your ML models.</p>
<p>Outside the production world, mitigation is also a great debugging strategy. If your friend’s internet is broken,
you might invite them over to use your internet while you debug it together, temporarily fixing their problem,
while also making it more likely they’ll be able to fix their own internet next time. Or, you might suggest they
use their phone as a hotspot so you can help them tomorrow when you’re less busy. Both of these are legitimate mitigation strategies!</p>
<h2 id="learn">Learn</h2>
<p>Now that users aren’t affected, you can move on to designing a long-term solution, putting systems in place to
ensure the issue doesn’t recur, and reflecting on how the issue could have been avoided in the first place. These
ideas are often included in a postmortem or retrospective that should be blameless and constructive. As a person
who worked on triaging and mitigating the issue, you should document how you did so, what went well, and what went
poorly - either by good/bad design, or by luck. For example:</p>
<ul>
  <li>It took 3 hours to realize this issue was only happening in our QA environment because our QA and production logs are mixed together</li>
<li>It was clear 5 minutes after I received the alert that only Europe was affected because we have useful graphs that show traffic by region</li>
<li>Rolling back the change alleviated the issue, but not because of a breaking change - it turns out the networking on our servers bugs out after 1024 hours and the restart required to roll back the change reset the clock!</li>
</ul>
<p>It’s important that a postmortem clearly outline how to prevent the same issue from happening again, and you
should make it a goal to never have to write the same one twice. Ideally, your postmortem not only results in
work that prevents the same exact problem happening again, but also makes similar problems easier to debug in the
future. As such, your postmortem should not be a secret - share it with everyone who works on and is on-call for
your service. At the least, it will provide a reference for the next person to encounter a similar issue.</p>
<h2 id="final-thoughts">Final thoughts</h2>
<p>What did you think of this article? Do you follow a similar thought process when doing any sort of debugging? How do you think these steps might be applicable outside the software world? Let me know by sending me an email at joe@petrich.xyz.</p>