  • To start, let’s verify that Bluesky the app is indeed open-source. Yep, it is. But that’s not the same as having all the machinery be open-source, where anyone could spin up their own compatible service, maybe named ExampleSky. To be compatible, ExampleSky would need to use the same backend interface – aka protocol – as Bluesky, which is known as ATProto. The equivalent (and older) protocol behind Mastodon and Lemmy is ActivityPub.

    ATProto is ostensibly open-source, but some argue that it’s more akin to “source available” because only the Bluesky parent company makes changes or extensions to the protocol. Any alternative implementation would be playing a game of catch-up with future versions of the protocol. History shows that this is a real risk.

    On the flip side, Mike Masnick – founder of Techdirt, author of the 2019 paper advocating for “protocols, not platforms” that inspired Bluesky, and recently added member of the board of Bluesky, replacing Jack Dorsey – argues that the core ability to create a separate “Bluesky2” is where the strength of the protocol lies. My understanding is that this acts as a hedge: if Bluesky1 ever becomes undesirable enough, forking to Bluesky2 becomes the more agreeable option. To me, this is no different than a FOSS project (eg OpenOffice) becoming so disagreeable that all the devs and users fork the project and leave (eg LibreOffice).

    But why a common protocol? Masnick’s paper argues – and IMO this is in full agreement with what ActivityPub has been aiming towards for years – that protocols allow users to be platform-agnostic. Mastodon users are keenly aware that if they don’t like their home instance, they can switch. Sure, you’ll have to link to your new location, but it’s identical to changing email providers. In fact, email is one of the few protocol-based, provider-agnostic systems on the Internet still in widespread use. Imagine if somehow Gmail users couldn’t send mail to Outlook users. It’d be awful.

    Necessarily, both ActivityPub and ATProto incorporate decentralization in their designs, but in different fashions. ActivityPub can be described as coarse decentralization, as every instance is a standalone island that can choose to – and usually does – federate with other instances. But at the moment, core features like registration, login, rate limiting, and spam monitoring are all per-instance. And as it stands, much of that involves a human, meaning that scaling is harder. But the ActivityPub design suggests that instances shouldn’t be large anyway, so perhaps that’s not too big an issue.

    ATProto takes the fine-grained design approach where each feature is modular, and thus can be centralized, farmed out, or outright decentralized. Now, at this moment, that’s a design goal rather than reality, as ATProto has only existed for a few years. I think it’s correct to say for now that Bluesky is potentially decentralizable, in the coarse sense that Mastodon and Lemmy are.

    There are parts of the Bluesky platform – as in, the one the Bluesky organization runs – which definitely have humans involved, like the Trust and Safety team. Though compared to the total dismantlement of the Twitter T&S team and the resulting chaos, it may be refreshing to know that Bluesky has a functional team.

    A long term goal for Bluesky is the “farming out” of things like blocklists or algorithms. That is to say, if you want to automatically duplicate the blocks that your friend uses – because what she finds objectionable (eg Nazis) probably matches your own sensibilities – you can. In fact, at this very moment, I’m informed that Bluesky users can subscribe to a List and implement a block against all members of the List. A List need not be just users, but can also include keywords, hashtags, or any other conceivable characteristic. Lists can also be user-curated, curated by crowd sourcing, or algorithmically generated. The latter is the long-term goal, not entirely implemented yet. Another example of curation is “Starter Packs”, a List of specific users grouped by some common interest, eg Lawsky (for lawyers). Unlike a blocklist, which you’d want updated automatically, a Starter Pack is a one-time event to help fill your feed with interesting content, rather than algorithmic random garbage.

    So what’s wrong with Bluesky then? It sounds quite nice so far. And I’m inclined to agree, but there’s some history to unpack. In very recent news, Bluesky the organization received more venture capital money, which means it’s worth mentioning what their long term business plan is. In a lot of ways, the stated business plan matches what Discord has been doing: higher quality media uploads and customizations to one’s profile. The same statement immediately ruled out any sort of algorithmic upranking or “blue checks”; basically all the ills of modern Twitter. You might choose to take them at their word, or not. Personally, I see it as a race between: 1) ATProto and the Bluesky infra being fully decentralized to allow anyone to spin up ExampleSky, and 2) a potential future enshittification of Bluesky in service of the venture capitalists wanting some ROI.

    If scenario 1 happens first, then everyone wins, as bridging between ActivityPub and ATProto would advance by leaps and bounds, and anyone who wants their own ATProto instance can spin one up, choosing whether they want to rely on Bluesky for any/all features or none at all. Composability of features is something that ATProto can meaningfully contribute to the protocol space, as it’s a tough nut to crack. Imagine running your own ATProto instance but still falling back on the T&S team at Bluesky, or leveraging their spam filters.

    But if scenario 2 happens first, then we basically have a Twitter2 cesspool. And users will once again have to jump ship. I’m cautiously hopeful that the smart cookies at Bluesky can avoid this fate. I don’t personally use Bluesky, being perfectly comfortable in the Fediverse. But I can’t deny that for a non-tech oriented audience, Bluesky is probably what I’d recommend, along with opting in to bridging with the Fediverse. Supposed episodes of “hyping” don’t really ring true to me, but like I said, I’m not currently an invested user of Bluesky.

    What I do want to see is the end result of Masnick’s paper, where the Internet hews closer to its roots where interoperability was the paramount goal, and the walled gardens of yore waste away. If ATProto and ActivityPub both find their place in the future, then IMO, it’ll be no different than IMAP vs POP3.


  • Pew Research has survey data germane to this question. As it stands, a clear majority (79%) of opposite-sex married women changed their family/last name to their husband’s.

    But for never-married women, only a third (33%) said they would change their name to their spouse’s family name. 24% of never-married women were unsure whether they would or wouldn’t change their name upon marriage.

    From this data, I would conclude that while the trend of taking the husband’s last name is fairly entrenched right now, the public’s attitudes are changing and we might expect the popularity of this practice to diminish over time. The detailed breakdown by demographic shows that the practice was less common (73%) in the 18-49 age group than in the 50+ age group (85%).

    Pew Research name change data

    However, some caveats: the survey questions did not inquire into whether the never-married women intended on ever getting married; it simply asked “if you were to get married…”. So if marriage as a form of cohabitation becomes less popular in the future, then the change-your-family-name trend could be in sharper decline than this data would suggest.

    Alternatively, the data could reflect differences between married and never-married women. Perhaps never-married women – by virtue of not being married yet – answered “would not change name” because they did not yet know what their future spouse’s name would be. No option for “it depends on his name” was offered by the survey. Never-married women may also weigh more heavily the paperwork burden – USA specific – of changing one’s name.

    So does this help answer your question? Eh, only somewhat. Younger age and left-leaning politics seem to be factors, but that’s a far cry from cause-and-effect. Given how gradually the trend is changing, it’s more likely that the practice is mostly cultural. If so, then the answer to “why is cultural practice XYZ a thing?” is always “because it is”.


  • I am a software engineer by trade, so when I started cooking, everything and every tool was intimidating, because I had no idea how it worked nor what it was meant for. I knew nothing about knives besides not to drop one, didn’t know the difference between a wok and a skillet, and didn’t understand how oil creates a non-stick surface on a non-non-stick pan.

    What helped me was a book that isn’t a recipe collection or cook book, but something closer to a food and kitchen textbook. The Food Lab by Kenji Lopez-Alt goes into some excruciatingly scientific detail about the role of different kitchen implements, then showcases recipes that apply theory to practice. Each step in the recipes thoroughly describes what to do, and the author puts a lot of content onto his YouTube channel as well.

    It was this book that convinced me to buy, strip, and season a cast iron pan, which has already proven its worth as a non-sticking vessel comparable to my old Teflon-coated pans. And I think for you, reading the theory and following some of the recipes might develop sufficient experience to at least be comfortable in an active kitchen. It’s very much a chicken-and-egg problem – if you’ll pardon the poultry pun – but this book might be enough to make progress in the kitchen.

    Also, since it was published in 2015, it’s very likely available at your local library, so check there first before spending money to buy the book. Good luck with your culinary development!



  • While technically correct, I feel like the headline should have mentioned that these two new models are aimed at the off-road market. As in, electric dirtbikes.

    And while I am indeed thrilled that prices are pushing downward, I’m still not sure if $4200 is going to substantially move product, at least not without convincing buyers of the unique benefits of going electric. A cursory web search shows that 125cc dirt bikes are in the 7 kW class, but can be bought new for $3500. So the gap is definitely closing, but it’s still notable.

    I do wonder if they plan to go even smaller, into the 3-4 kW class, which would roughly be the realm of 50cc or 80cc. That would definitely be an off-road only category, and is more attuned to kids, or perhaps adults wishing to leisurely cruise around dirt tracks. It’s also a category where low duty-cycle use (ie one season only) and short range are most common, and the immediate benefit of electric is not having to stabilize two-stroke fuel over the winter. An electric dirtbike that can sit in a shed but be ready to use when pulled out three times a year is the sort of product that suburban buyers might appreciate.




  • In summary, Denver’s ebike rebate experiment was inspired by utility rebates from other regions, was stupendously successful, and was flattered by emulation in other jurisdictions and the State of Colorado itself, to the point that the city might recast its program to equitably incentivize low-income riders, as well as focus on other barriers to riding, such as poor infrastructure. The experiment has paid off, and that’s before considering the small business boost to local bike shops and the expanded use of ebikes for transportation in addition to recreation.

    With that all said, I want to comment about the purported study which concluded that ebike rebate programs are less economically efficient than electric automobile rebates. Or I would, if the study PDF wasn’t trapped behind Elsevier’s paywall. I suppose I could email the author to ask for a copy directly.

    But from the abstract, the authors looked to existing studies which originally suggested that ebike rebates are less efficient. So I found that study’s citation list and identified two entries which could be relevant:

    The first study looked at ebikes in England – not the whole UK – and their potential to displace automobile trips, thus reducing overall CO2 emissions. It concluded that increased ebike uptake would produce emissions savings faster than waiting for average automobile emissions to fall, or for driving to be reduced by other means, as a way to slow the climate disaster. The study did not analyze the long-term expected emissions reduction compared to cars, but did conclude that ebikes would produce the most savings in rural areas, as denser cities are already amenable to acoustic cycling and public transport.

    The second study looked at how new ebike owners changed their travel behavior over a year, for participants from three California jurisdictions offering incentives, two in the San Francisco Bay Area and one along the North Coast. The study concluded that in the first few months, most riders used their ebike 1-3 times per week, but towards the end of the study period, most riders reduced their use, although the final rate was still higher than the national average rate for acoustic bicycling. The study found that at its peak, ebikes replaced just a hair above 50% of trips, and thus concluded that the emissions saved by displacing automobile trips were not as cost effective as emissions reduced through EV automobile incentives. They computed the dollars-per-ton-of-CO2 for each mode of transportation.

    So it would seem that the original study looked to this second study and reached a similar conclusion. However, the second study noted that their data has the caveat of being obtained from 2021 to 2022, when the global pandemic pushed bicycling into the spotlight as a means of leaving one’s house for safe recreation. It would not be a surprise then that automobile trips were not displaced, since recreational bicycle rides don’t compete with driving a car from point A to point B for transportation.

    Essentially, it seems that the uncertainty in emissions reduction is rooted in variability as to whether ebikes are used mostly for recreation, or mostly for displacing car trips. But as all the studies note, ebikes have a host of other intangible benefits.

    IMO, it would be unwise to read only the economic or emissions conclusion as a dismissal of ebikes or ebike rebates. Instead, the economics can be boosted by focusing resources on rural or poorer riders who do not have non-automobile options, and the emissions savings can be bolstered by making it easier/safer to ride. Basically, exactly what Denver is now doing.


  • If you were to properly consider the problem the actual cost would be determined by cost per distance traveled and you essentially decide the distance by which ever you are budgeted for.

    I wrote my comment in response to the question, and IMO, I did it justice by listing the various considerations that would arise, in the order which seemed most logical to me. At no point did I believe I was writing a design manual for how to approach such a project.

    There are much smarter people than me with far more sector-specific knowledge to “properly consider the problem”, but if you expected a feasibility study from me, then I’m sorry to disappoint. My answer, quite frankly, barely rises to a back-of-the-envelope level, the sort of answer that I could give if asked the same question in an elevator car.

    I never specified that California would be the best place to implement this process.

    While the word California didn’t show up in the question, it’s hard to imagine any other “state on the coast” with “excess solar” where desalination would be remotely beneficial. 30 US States have coastlines, but the Great Lakes region and the Eastern Seaboard are already humid and wet, with rivers and tributaries that aren’t exactly in a drought condition. That leaves the three West Coast states, but Oregon and Washington are fairly well-supplied with water in the PNW. That kinda leaves California, unless we’re talking about Mexican states.

    I’m not dissing the concept of desalination. But the literature on existing desalination plants around the world showcases the numerous challenges beyond just the money. Places like Israel and Saudi Arabia have desalination plants out of necessity, but the operational difficulties are substantial: clogging of inlet pipes by sealife is a regular occurrence, disposal of the extracted brine/salt is ecologically tricky, energy costs are high, and more. And then to throw pumped hydro into this project would make it a substantial undertaking, as dams of any significant volume are always serious endeavors.

    At this point, I feel the question is approaching pie-in-the-sky levels of applicability, so I’m not sure what else I can say.


  • I’m not a water or energy expert, but I have occasionally paid attention to the California ISO’s insightful – while perhaps somewhat dry – blog. This is the grid operator that coined the term “duck curve” to describe how abundant solar generation during daylight hours carves a deep midday trough into the grid’s net demand.

    So yes, there is indeed an abundance of solar power during the daytime, for much of the year in California. But the question then moves to: where is this power available?

    For reference, the California ISO manages the state-wide grid, but not all of California is part of that grid. Some regions like the Sacramento and Los Angeles areas have their own systems which are tied in, but those interconnections are not sufficient to import all the necessary electricity into those regions; local generation is still required.

    To access the bulk of this abundant power would likely require high-voltage transmission lines, which PG&E (the state’s largest generator and transmission operator) operates, as well as some other lines owned by other entities. By and large, building a new line is a 10+ year endeavor, but plenty of these lines meet up at strategic locations around the state, especially near major energy markets (SF Bay, LA, San Diego) and major energy consumers (San Joaquin River Delta pumping station, the pumping station near the Grapevine south of Bakersfield).

    But water desalination isn’t just a regular energy consumer. A desalination plant requires access to salt water and to a freshwater river or basin to discharge. That drastically limits options to coastal locations, or long-distance piping of salt water to the plant.

    The latter is difficult because of the corrosion that salt water causes; it would be nearly unsustainable to maintain such a pipe for distances beyond maybe 100 km, and that’s pushing it. The coastal option would require land – which is expensive – and comes with all the implications of just being near the sea. But setting aside the regulatory/zoning issues, we still have another problem: how to pump water upstream.

    Necessarily, the sea is where freshwater rivers drain to. So a desalination plant by the ocean would have to send freshwater back up stream. This would increase the energy costs from exorbitant to astronomical, and at that point, we could have found a different use for the excess solar, like storing it in hydrogen or batteries for later consumption.

    But as a last thought experiment, suppose we put the plant right in the middle of the San Joaquin River Delta, where the SF Bay’s salt water meets the Sacramento River’s freshwater. This area is already water-stressed, due to diversions of water to agriculture, leading to the endangerment of federally protected species. Pumping freshwater into here could raise the supply, but that water might be too clean: marine life requires the right mix of water to minerals, and desalinated water doesn’t tend to have the latter.

    So it would still be a bad option there, even though power, salt water, and freshwater access are present. Anywhere else in the state is missing at least one of those three criteria.


  • My initial reaction was “this cannot work”. So I looked at their website, which is mostly puffery and other flowery language. But to their credit, they’ve got two studies, err papers, err preprints, uh PDFs, one of which describes their validation of their product against wind tunnel results.

    In brief, the theory of operation is that there’s a force sensor at each point where the rider meets the bike: handlebars, saddle, and pedals. Because Newton’s Third Law of Motion requires that aerodynamic forces on the rider be fully transferred to the bike – or else the rider is separating from the bike – the forces on these sensors will total to the overall aerodynamic forces acting on the rider.

    From a theoretical perspective, this is actually sound, and would detect aero forces from any direction, regardless of whether they’re caused by clothes (eg a hoodie flailing in the air) or a cross-wind. It does require the assumption that the rider doesn’t contact any other parts of the bike, which is reasonable for racing bikes.
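
    To make that concrete, here’s a minimal numeric sketch in Python – my own illustration with invented numbers, not the vendor’s algorithm – assuming a steady-state rider on flat ground whose weight is already known:

        # All figures invented for illustration. Forces are what the rider exerts
        # on the bike at each contact point, in newtons, in the bike's frame:
        # x = direction of travel, z = vertical.
        import numpy as np

        forces = {
            "handlebars": np.array([-18.0,  0.5,  -95.0]),
            "saddle":     np.array([-10.0,  0.2, -410.0]),
            "pedals":     np.array([ -7.0, -0.3, -260.0]),
        }

        # Steady state, flat road: by Newton's third law, the rider pushes on the
        # bike with the sum of the gravity and aero loads acting on their body.
        total = sum(forces.values())
        rider_weight = np.array([0.0, 0.0, -765.0])   # ~78 kg rider, assumed known

        aero = total - rider_weight   # subtract gravity; what's left is the aero load
        drag = -aero[0]               # drag opposes the direction of travel

        print(f"Estimated aero drag: {drag:.1f} N")   # -> 35.0 N

    Of course, the real device also has to separate out pedaling and inertial loads, which is presumably where the hard engineering lives.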

    But the practical issue is that while aero forces are totalized with this method, it provides zero insight into where the forces are being generated. This makes it hard to determine what rider position will optimize airflow for a given condition. Making aero improvements this way becomes a game of guess-and-check. Whereas in a wind tunnel, identifying zones of turbulent air is fairly easy, using – among other things – smoke to see how the air travels around the rider. The magnitude of the turbulent regions can then be quantified individually, which helps paint a picture of where improvements can be made.

    For that reason alone, this is not at all a “wind tunnel killer”. It can certainly still find use, since it yields in-field measurements that can complement laboratory data. Though I’m skeptical about how a rider would even respond if given real-time info about their body’s current aerodynamic drag. Should they start tacking side to side? Tuck further in?

    More data can be useful, but one of the unfortunate trends from the Big Data explosion is the assumption that more data is always useful. If that were true, everyone would always be advised to undergo every preventative medical diagnostic annually, irrespective of risk. Whereas the current reality is that overdiagnosis is a real problem now, precisely because some doctors and patients are caught in that false assumption.

    My conclusion: technically feasible but seems gimmicky.


  • “Not everybody can use a bike to get around — these are some of our major arterial roads, whether it is Bloor, University or Yonge Street — people need to get to and from work,” Sarkaria said.

    This is some exasperatingly bad logic from the provincial Transport Minister. The idea that biking should be disqualified because the infrastructure cannot magically enable every single person to start biking is nonsense. By the same “logic”, the provincial freeways should be closed down because not everyone can drive a car. And then there’s some drivel about bike lanes contributing to gridlock, which is nonsense in the original meaning and disproven in the colloquial meaning.

    It is beyond the pale that provincial policy will impose a ceiling on what a municipality can do with its locally-managed roads. At least here in America, a US State would impose only a floor and cities would build up from there. Such minimums include things like driving on the right and how speed limits are computed. But if a USA city or county aspires for greatness, there is no general rule against upgrading a road to an expressway, or closing a downtown street to become fully pedestrianized.

    How can it be that Ontario policy will slide further backwards than that of US States?


  • My literacy of the German language is almost nil, but it seems patently unreasonable for an author or journalist to believe that over half of the incidents involving a fairly common activity would be fatal. Now, I should say that I’m basing this on prior knowledge of the German e-bike/pedelec market, where over half the bikes sold there are electric. What this implies: take the sizable population of the country, then the subset who ride bicycles, then the subset who ride pedelecs, and finally the subset who get into a collision or other incident – is it somehow believable that over half of those riders will die?

    That cannot possibly be true, does not pass the sniff test, and isn’t even passable as a joke. If it were true, there would be scores of dead riders left and right, in every city in the country, daily. I suspect it would overtake (pun intended) the number of murders in that fairly safe country.

    Compare this with parachuting, where a headline of “most accidents are fatal” would be more sensible. I’m shocked that no one in the publication’s chain of command noticed such a gross error. While it’s true that some statistics are bona fide shocking – American shooting deaths come to mind – this reads like a very bizarre instance of confirmation bias, since no one caught the error.

    I was led to believe that cycling in Germany is “normalized but marginalized”, but this type of error speaks to some journalistic malpractice.




  • For the historical questions, I don’t really have answers, especially where it involves departures from the Western world. I did, however, briefly touch on Islamic banking, which I’ve always found intriguing as the Islamic faith does not permit charging interest on loans, viewing it as usurious. I’m informed that Christianity also had a similar prohibition on usury, but apparently it fell due to the need to fund the constant wars in Europe.

    I’m not really seeing the difference in feudalism except a members only kind of participation with a crony pool of inbreds, not all that different than the billionaires of today.

    I think the important distinction insofar as stock markets are concerned is that the crony pool of inbreds have access, but so too does the commoner. Well, the middle-class commoner, usually. And we’ve seen David-vs-Goliath cases where the commoners put up a decent fight against the inbreds’ institutions; the whole GameStonk fiasco comes to mind. An equivalent economic upset would have been wholly impossible at any point during any feudal period in history.

    What are the idealist or futurist potential alternatives between the present and a future where wealth is no longer the primary means of complex social hierarchical display? My premise is that basing hierarchical display on the fundamental means of human survival is barbaric primitivism.

    From conversations I’ve had previously, possible answers to that question are presented in the works of Paul Cockshott, author of Towards A New Socialism. I’ve not read it, but friends in Marxist-Leninist parties have mentioned it. The Wikipedia page, however, notes that it’s an economics book, which could be fairly technical and difficult to read. Sort of like how Das Kapital is more-or-less a textbook, in contrast with how Wage Labour And Capital was meant for mass consumption.

    Wealth extraction neglects the responsibility of the environment and long term planning.

    True. The cost to the environment is not “internalized”, to use the technical term. Hence, it doesn’t need to be paid for, and is thus “free real estate”. Solutions to internalize environmental harm include carbon taxes or cap-and-trade. But the latter is a lukewarm carbon tax because it only looks at the end-result emissions, rather than taxing at the oil well, so to speak.

    I’m curious how humanity evolves in a distant post scarcity future but without becoming authoritarian or utopian/dystopian

    Might I recommend The Three-Body Problem and the trilogy overall by Liu Cixin? This phenomenal hard scifi work describes a space-faring future where the human species faces a common, external threat. After all, much of today’s progress was yesterday’s scifi. So why not look to scifi to see what tomorrow’s solutions might be. It’s no worse than my crystal ball, which is foggy and in need of repair.


  • If you’re in the USA, I cannot overstate how useful it may be to refer to the US Bureau of Labor Statistics’s (BLS) Occupational Outlook Handbook (OOH), a resource which I believe has no direct comparison:

    How can I learn about an occupation that is of interest to me?

    The Occupational Outlook Handbook (OOH) provides information on what workers do; the work environment; education, training, and other qualifications; pay; the job outlook; information on state and area data; similar occupations; and sources of additional information for more than 300 occupational profiles covering about 4 out of 5 jobs in the economy.

    As for answering the question, anecdotal conversations I’ve had suggest that the trades (eg glazier, electrician, plumber) in the USA are promising fields, since while the nature of the job might change with different needs, people still require electric wires and piped water. But the OOH could give you more specific outlooks for those specific trades.

    I was once told that plumbers can make very serious sums of money, even if they’re only ever installing supply-side piping. That is to say, the plumbing for water supply, as compared to drainage or sewer pipe, which are generally perceived as less appealing.


  • I’ll take a shot from the hip at this question, but note that I won’t add my customary citations or links.

    The stock market is the paragon of property and trusts, contracts, corporations and law, and the capitalist socio-economic system. The very existence of the stock market implies a society that has some or most of these concepts.

    For example, for shares to be traded, there generally must exist ownership rights upon the shares, distinct from the ownership rights that the company has of its own property. Or if not outright ownership of a share, then the benefit that a share provides (eg dividends). It also implies a legal system that will enforce these rights and the obligations of the company to its shareholders.

    For a tradable company to exist, it must be organized/chartered as an entity distinct from any single person. This is different than the feudal days, when ventures would be undertaken “in right of the King” or some member of the nobility. The feudal method wouldn’t work for modern companies, or else the King/Duke/Count/whatever could stiff the shareholders by just taking all the earnings. The company still needs to be created by legal means: an Act of Parliament/Congress, letters patent from the Monarch, or the modern administrative method of applying to the state Secretary of State (USA) or Companies House (UK), as examples.

    Even the structure of a for-profit tradeable company – when compared to a state-owned enterprise, a non-profit, a co-op, or an NGO or QUANGO – is a representation of the values inherent to capitalism. A company is obliged to use the shareholders’ funds – which is held by the company but is owed to the shareholders – to extract the greatest return. But this can come in many forms.

    Short-term value from buying investments and quickly flipping them (eg corporate home buyers) is different than rent-seeking (eg corporate landlords) and is still different than long-term investments that actively work to build up the value (eg startup incubators, private wealth funds, Islamic banking, transit-owned adjacent property). If a for-profit company doesn’t have a plan to extract a return… they’re in hot water with the shareholders, with penalties like personal liability for malfeasance.

    Another way of looking at the stock market is that if you have all the underlying components but don’t yet have a stock market, it would soon appear naturally. That is to say, if the public stock markets were banned overnight, shares would still trade but just under the table and without regulation. But if any critical part underpinning the markets stopped existing, then the market itself would collapse.

    History shows numerous examples where breakdowns of the legal system resulted in market mayhem, or when corporate property was expropriated for the Monarch’s wars or personal use, or when funds invested into or paid out of companies were hampered by terrible monetary inflation.

    As for what the stock market does, its greatest purpose is to organize investments into ventures. Historically, ventures were things like building a ship to sail to the New World and steal – er, obtain – goods to sell at home. Merchant ships were and are still very expensive, so few singular persons could afford one. And even if they could, the failure of the venture could be catastrophic for that person’s finances. Better to spread the risk and the reward amongst lots of people.

    What was once the sole domain of the landed gentry and nobility, slowly opened to the nouveau riche during the Industrial Revolution(s), then in turn to everyday people… for better or worse. It’s now almost trivial to buy a share in any particular listed company, but just opening the stock market to everyone would have been chaotic at best. I think it’s NYSE that still has on-floor traders/brokers, but imagine if all shares in that market had to be traded in a single room, with no digital trading. It’s already quite lively on the trading floor today, now add all the trades from middle class Americans on payday. It would become physically impossible.

    Likewise, a pure capitalist stock market would permit awful things like bribing journalists to write fake stories to crash a stock, then buy it for cheap. Or pump and dump scams. And would have no “circuit breakers” that halt a share during so-called flash crashes.

    I’m reminded of a scene from the ITV show Agatha Christie’s Poirot in the episode “Appointment With Death”, where a wealthy woman is not only murdered but her business empire collapses because the murderer also spooks the markets as a double whammy, causing investors to panic and sell up. The relevant implication here is that despite her company not having changed its financial picture, it got cut up for scrap and thus lost most of its value, rendering the business worthless in the end. Companies are usually valued more as a going-concern, above what all their property put together would amount to. Where does that additional value come from? It’s the prospect of a return from this particular assemblage of resources.

    Suffice it to say, the stock market is a lot of things. But I view it as a natural result of certain other prerequisites, meaning we can’t really get rid of it, so instead it should be appropriately regulated.


  • Agreed, it’s a very bad design. If your school speed limit covers most of the daylight hours on weekdays, is the implicit suggestion that it’s fine to drive faster on weekends and during nighttime? The street should be rebuilt to enforce the desired speed limits, not with paint or signs.

    Oh, we’re talking about the letters on the glass. My bad lol


  • If you hold a patent, then you have an exclusive right to that invention for a fixed period, which would be 20 years from the filing date in the USA. That would mean Ford could not claim the same or a derivative invention, at least not for the parts which overlap with your patent. So yes, you could sit on your patent and do nothing until it expires, with some caveats.

    But as a practical matter, the necessary background research, the application itself, and the defense of a patent just to sit on it would be very expensive, with no apparent revenue stream to pay for it. I haven’t looked up what sort of patent Ford obtained (or maybe they’ve merely started the application) but patents are very long and technical, requiring whole teams of lawyers to draft properly.

    For their patent to be valid, it must not overlap with an existing claim, and it must be novel and non-obvious, among other requirements. They would only do this to: 1) protect themselves from competition in the future, 2) monetize the patent by directly implementing it, licensing it out to others, or becoming a patent troll and extracting nuisance-value settlements, or 3) continue participating in the Intellectual Property land-grab they’re already deep into, by obtaining outlandish patents. The latter is a form of “publish or perish” and allows them to appear like they’re on the cutting edge of innovation.

    A patent can become invalidated if it is not sufficiently defended. This means that if no one even attempts to infringe, then your patent would be fine. But if someone does, then you must file suit or negotiate a license with them, or else they can challenge the validity of your patent. If they win, you’ll lose your exclusive rights and they can implement the invention after all. This is not cheap.


  • I’ll address your question in two parts: 1) is it redundant to store both the IP subnet and its subnet mask, and 2) why doesn’t the router store only the bits necessary to make the routing decision.

    Prior to the introduction of CIDR – which came with the “slash” notation, like /8 for the 10.0.0.0 RFC1918 private IPv4 subnet range – subnet masks could genuinely be any bit arrangement imaginable. The most sensible would be to have contiguous, MSBit-justified subnet masks, such as 255.0.0.0. But the standard did not preclude using something unconventional like 255.0.0.1.

    For those confused about what a 255.0.0.1 subnet mask would do – and to be clear, a lot of software might prove unable to handle this – it describes a subnet with 2^23 addresses, where the LSBit must match that of the IP subnet. So if your IP subnet was 10.0.0.0, then only even numbered addresses are part of that subnet. And if the IP subnet is 10.0.0.1, then that only covers odd numbered addresses.

    Yes, that means two machines with addresses 10.69.3.3 and 10.69.3.4 aren’t on the same subnet. This would not be allowed when using CIDR, as contiguous set bits are required with CIDR.
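
    Here’s a small sketch of that matching rule in Python; the helper name is made up, but the mask and addresses are the ones above:

        import ipaddress

        def in_subnet(addr: str, network: str, mask: str) -> bool:
            # Pre-CIDR rule: an address belongs to the subnet if
            # (addr AND mask) == (network AND mask), even for a non-contiguous mask.
            a = int(ipaddress.IPv4Address(addr))
            n = int(ipaddress.IPv4Address(network))
            m = int(ipaddress.IPv4Address(mask))
            return (a & m) == (n & m)

        # With mask 255.0.0.1, only the first octet and the least-significant bit matter.
        print(in_subnet("10.69.3.4", "10.0.0.0", "255.0.0.1"))  # True: LSBit is 0, like the subnet's
        print(in_subnet("10.69.3.3", "10.0.0.0", "255.0.0.1"))  # False: LSBit is 1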

    So in answer to the first question, CIDR imposed a stricter (and sensible) limit on valid IP subnet/mask combinations, so if CIDR cannot be assumed, then it would be required to store both the IP subnet and the subnet mask, since the mask bits might not be contiguous.

    For all modern hardware in the last 15-20 years, CIDR subnets are basically assumed. So this is really a non-issue.

    For the second question, the router does in fact store only the necessary bits to match the routing table entry, at least for hardware appliances. Routers use what’s known as a TCAM (ternary content-addressable memory) for routing tables, where the bitwise AND operation can be performed, but with a twist.

    Suppose we’re storing a route for 10.0.42.0/24. The subnet size indicates that the first 24 bits must match a prospective destination IP address. And the remaining 8 bits don’t matter. TCAMs can store 1’s and 0’s, but also X’s (aka “don’t cares”) which means those bits don’t have to match. So in this case, the TCAM entry will mirror the route’s first 24 bits, then populate the rest with X’s. And this will precisely match the intended route.
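
    In software terms, the effect is roughly the following – my own analogy, not actual router firmware; the helper names and the two destination addresses are made up, while the 10.0.42.0/24 route is the one above:

        import ipaddress

        def tcam_entry(route: str):
            # Build a (value, care) pair: 1 bits in "care" must match, 0 bits are X's.
            net = ipaddress.ip_network(route)
            return int(net.network_address), int(net.netmask)

        def matches(dest: str, entry) -> bool:
            value, care = entry
            return (int(ipaddress.IPv4Address(dest)) & care) == value

        entry = tcam_entry("10.0.42.0/24")   # first 24 bits stored, last 8 are X's
        print(matches("10.0.42.17", entry))  # True: the first 24 bits agree
        print(matches("10.0.43.17", entry))  # False: a bit in the third octet differs

    (A real TCAM compares the destination against every stored entry in parallel, and the router then picks the longest matching prefix among the hits.)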

    As a practical matter then, the TCAM must still be as wide as the longest possible route, which is 32 bits for IPv4 and 128 bits for IPv6. Yes, I suppose some savings could be made if a CIDR-only TCAM could conserve the X bits, but this makes little difference in practice and it’s generally easier to design the TCAM for max width anyway, even though non-CIDR isn’t supported on most routing hardware anymore.