Firefox user-overrides.js update

I updated my user-overrides.js again. You can grab it here (Ctrl+S). Changes follow…

* added 'toolkit.legacyUserProfileCustomizations.stylesheets'
* added 'privacy.donottrackheader.enabled' to testing
* added 'extensions.blocklist.enabled' to testing
* added 'dom.push.enabled'
* added 'dom.push.connection.enabled'
* added 'dom.push.serverURL'
* added 'dom.push.userAgentID'
* added 'network.trr.mode'
* added 'network.trr.bootstrapAddress'
* added 'network.trr.uri'
* updated instructions
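For anyone curious what those additions look like, here's a rough sketch of the push and DNS-over-HTTPS (TRR) prefs in user-overrides.js form. The pref names are the ones listed above, but the values are only placeholders to illustrate the syntax (the TRR mode, resolver URI and bootstrap address in particular are examples, not recommendations), so grab the file for my actual settings.

// allow userChrome.css / userContent.css to be loaded from the profile's chrome folder
user_pref("toolkit.legacyUserProfileCustomizations.stylesheets", true);
// disable push notifications and blank the push identifiers (example values)
user_pref("dom.push.enabled", false);
user_pref("dom.push.connection.enabled", false);
user_pref("dom.push.serverURL", "");
user_pref("dom.push.userAgentID", "");
// DNS-over-HTTPS (TRR): 0=off (default), 2=DoH with fallback to system DNS, 3=DoH only, 5=explicitly off
user_pref("network.trr.mode", 2);
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
user_pref("network.trr.bootstrapAddress", "1.1.1.1");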

Firefox user-overrides.js update

I updated my user-overrides.js file. You can grab it here (Ctrl+S). Following are the changes since the last version…

* misc. edits to intro text
* changed/corrected some pref descriptions
* removed some unneeded prefs related to smooth scrolling
* removed all [DEF=*] tags since these values could be changed by Mozilla at any time, yet are rarely ever re-checked
* removed 'alerts.showFavicons'
* removed 'browser.taskbar.lists.enabled' – duplicate of user.js
* removed 'browser.taskbar.lists.frequent.enabled' – duplicate of user.js
* removed 'browser.taskbar.lists.recent.enabled' – duplicate of user.js
* removed 'browser.taskbar.lists.tasks.enabled' – duplicate of user.js
* removed 'browser.taskbar.previews.enable' – duplicate of user.js
* removed 'extensions.blocklist.url' – essentially same as user.js
* removed 'network.trr.uri'
* changed value of 'layers.geometry.opengl.enabled' – temp prefs
* moved 'privacy.trackingprotection.cryptomining.enabled' from the testing section to the user customization section
* added 'extensions.htmlaboutaddons.discover.enabled' to temp section
* added 'browser.newtabpage.activity-stream.asrouter.userprefs.cfr.features'
* added 'privacy.resistFingerprinting.letterboxing'
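As a rough illustration, two of the additions would be declared in user-overrides.js like this. The values here are just one way someone might set them, not necessarily what's in my file, and the letterboxing pref is covered further down:

// hide the 'Recommendations' pane in about:addons
user_pref("extensions.htmlaboutaddons.discover.enabled", false);
// disable the Contextual Feature Recommender (CFR) for Firefox features
user_pref("browser.newtabpage.activity-stream.asrouter.userprefs.cfr.features", false);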

Just a reminder that this user-overrides.js contains My Personal Settings and so those using it will need to edit it. I relax some of the settings in the 'ghacks' user.js in return for a less troublesome browsing experience and in doing so i trade a little privacy for convenience, but i also use a VPN.

One of the newly exposed preferences in the 'ghacks' user.js is privacy.resistFingerprinting.letterboxing and it's a rather important preference that forces a generic inner window size, which makes it harder for websites to fingerprint the browser. The caveat with the preference, when enabled, is that you may notice an empty border around the web page content. I don't like that at all, so i disable this setting (for now). Just be aware that doing so makes the browser easier to fingerprint/track.

The other newly exposed preference is privacy.resistFingerprinting.letterboxing.dimensions. I haven't played with this yet, but this is used to set the dimensions of the viewport when the earlier preference is enabled. The idea is to use generic sizes to make fingerprinting harder. From the newest 'ghacks' user.js…

/* 4504: enable RFP letterboxing [FF67+]
 * Dynamically resizes the inner window (FF67; 200w x100h: FF68+; stepped ranges) by applying letterboxing,
 * using dimensions which waste the least content area, If you use the dimension pref, then it will only apply
 * those resolutions. The format is "width1xheight1, width2xheight2, ..." (e.g. "800x600, 1000x1000, 1600x900")
 * [NOTE] This does NOT require RFP (see 4501) **for now**
 * [WARNING] The dimension pref is only meant for testing, and we recommend you DO NOT USE it
 * [1] https://bugzilla.mozilla.org/1407366 ***/
user_pref("privacy.resistFingerprinting.letterboxing", true); // [HIDDEN PREF]
   // user_pref("privacy.resistFingerprinting.letterboxing.dimensions", ""); // [HIDDEN PREF]

Link Rot: How to find new sources for busted links

There are a variety of ways to find what you're looking for when you come across a broken link on the interwebs. Here are a few methods i like to use.

Search operators

The first thing you should know is how to use a search engine. Various search engines attach a special meaning to certain characters, and these 'search operators', as they're called, can be really helpful. Here are some handy examples that work for Google as well as some other search engines (and no, you shouldn't be using Google directly):

OR : 'OR', or the pipe "|" character, tells the search engine you want to search for this OR that. For example, cat|dog will return results containing 'cat' or 'dog', as will cat OR dog.

( ) : Putting words in a group separated by OR or | produces the same result as just described; however, you can then add words outside of the group that you always want to see in the results. For example, (red|pink|orange) car will return results that have 'car' in them, as well as either red, pink or orange.

" ": If you wrap a "word" in double quotes, you are telling the search engine that the word is really important. If you wrap multiple words in double quotes, you are telling the search engine to look for pages containing "that exact phrase."

site: : If you want to search only a particular domain, such as 12bytes.org, append site:12bytes.org to your query, or don't include any search terms if you want it to return a list of pages for the domain. You can do the same when performing an image search if you want to see all the images on a domain. You can also search a TLD (Top-Level Domain) using this operator. For example, to search the entire .gov TLD, just append site:.gov to your query.

- : If you prefix a word with a -hyphen, you are telling the search engine that you are not interested in results containing that word. You can do the same -"with a phrase" also.

cache: : Prefixing a domain with cache:, such as cache:12bytes.org, will return the most recent cached version of a page.

intitle: : If you prefix a word or phrase with intitle:, you are telling the search engine that the word or phrase must be contained in the titles of the results.

allintitle: : Words prefixed with allintitle: tells the search engine that all words following this operator must be contained in the titles of the search results.
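These operators can be combined, of course. As a made-up example (the domain is just a placeholder), the following query asks for pages on example.com whose titles contain the exact phrase "kitten care", which also mention either 'food' or 'litter', and which don't mention 'dog':

intitle:"kitten care" (food|litter) -dog site:example.com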

See this page for more examples.

Searching the archives

One of the simplest methods of finding the original target of a busted link is to copy the link location (right-click the link and select 'Copy Link Location') and plug that into one of the web archive services. The two most popular general archives that i'm aware of are the Internet Archive and Archive.is. The Internet Archive provides options to filter your search results for particular types of content, such as web pages, videos, etc. In either case, just paste the copied link in the input field they provide and press your Enter key. If the link is 'dirty', cleaning it up may provide better results. For example, let's say the link is something like:

http://example.com/articles/1995/that-dog-dont-hunt?ref=example.com&partner=somebody&utm_source=google

The archive may not return any results for the URL, but it might if you clean it up by removing everything after 'hunt'.
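In other words, the trimmed URL you'd feed to the archive would be just:

http://example.com/articles/1995/that-dog-dont-hunt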

There are also web browser extensions you can install to make accessing the archive services easier. For Firefox i like the View Page Archive & Cache add-on by 'Armin Sebastian'. When you find a dead link, just right-click it and from the 'View Page Archive' context menu you can select to search all of the enabled archives or just a specific one. Even if the page isn't dead you can right-click in the page and retrieve a cached copy or archive the page yourself. Another cool feature of this add-on is that it will place an icon in the address bar if you land on a dead page and you can just search for an archived version from the icon context menu.

Of these two services, the Internet Archive has a far more extensive library, but there's a very annoying caveat with it that defeats the purpose of an archive, which is why i much prefer Archive.is. The Internet Archive follows robots.txt directives. I won't go into why i think this is stupid; suffice it to say that content stored on the Internet Archive can be removed even if it does not break any of their rules.

Dead links and no clues

If all you have is a dead link with no title or description and you can't find a cached copy in one of the archives, you may still be able to find a copy of the document somewhere. For example, let's say the link is https://example.com/pages/my-monkey-stole-my-car.html. The likely title of the document you're looking for is right in the URL — my-monkey-stole-my-car — and you can plug that into a search engine just as it is, or remove the hyphens and wrap the title in double quotes to perform a phrase search, as shown below. Also see some of the other examples here.
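For that example, the phrase search would simply be:

"my monkey stole my car"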

Dead links with some clues

If you come across a dead link that has a title or description, but isn't cached in an archive, you can use that to perform a search. Just select the title, or a short but unique phrase from the description (which preferably doesn't contain any punctuation), then wrap it in double quotes and perform a phrase search.

Dead internal website links

If you encounter a website that contains a broken link to another page on the same site and you have some information about the document, like a title or excerpt, you can do a domain search to see if a search engine may link to a working copy. For example, let's assume the title of the page we're looking for is 'Why does my kitten hate me?' on the domain 'example.com'. Copy the title, wrap it in double quotes and plug it into a search engine that supports phrase searches, add a space, then append site:example.com. This will tell the search engine to look for results only on example.com. Also see some of the other examples here.
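Using that example, the full query would look like this:

"Why does my kitten hate me?" site:example.com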

YouTube videos you know exist but can't find

Because there is a remarkable amount of censorship taking place at YouTube, sensitive videos are sometimes hidden from the results returned by YouTube's own search engine. To get around this, use another search engine to perform a domain search as described in the 'Dead internal website links' section.

Deleted videos

In some cases, such as with a link that points to a removed YouTube video, you may not have any information other than the URL itself, not even a page title. Using the YouTube link as an example, https://www.youtube.com/watch?v=abc123xyz, copy youtube.com/watch?v=abc123xyz, wrap it in double quotes and plug that into your preferred search engine. You will often find a forum or blog post somewhere that will provide helpful clues, such as the video title or description which you can use to search for a working copy of the video. And the first place to look for deleted YouTube videos is YouTube! You can also search the Internet Archive as well as other video platforms that are more censorship resistant than YouTube, including Dailymotion, BitChute, DTube, LEEKWire and many others.
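To be clear, the query for that example link is nothing more than the partial URL wrapped in double quotes:

"youtube.com/watch?v=abc123xyz"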

Broken links on your own website

I don't know about you, but i have nearly 4,000 links on 12bytes.org as of this writing and many of them point to resources which TPTB (The Powers That [shouldn't] Be) would rather you knew nothing about. As such, many of the resources i link to are taken down and so i have to deal with broken links constantly, many of them deleted YouTube videos. If you run WordPress (self-hosted – i don't know about a wordpress.com site) you will find Broken Link Checker by 'ManageWP' in the WordPress plugin repository, and its job is to constantly scan your site looking for broken links. While it is not a bug-free plugin (the developer is not at all responsive and doesn't seem to fix anything in a timely manner), it is by far the most comprehensive tool of its type that i'm aware of. There are also many external services you could use whether you run WordPress or not.

Help! I'm at that point where…

This website has grown to the point where there's too much stuff for me to deal with and so i'd like to solicit some help. The Firefox junk alone is getting to be a rather large burden.

I'd like my readers to be more involved and make this more of a community project, especially those who know more than i do about the stuff i write about. At the moment i could use help with Everything Firefox, as well as the Rescuing Israel content, the latter of which needs a lot of work. Then there's a lengthy series of articles about the dangers of vaccines that i've been toying with for over a year and which is still nowhere near ready for publishing.

If you have knowledge in these areas, or in alternative/natural healthcare, along with a desire to become rich and famous (major LIE!), then i'd really like to hear from you. Following are the benefits of an editor position here at 12bytes.org:

Pay: $0
Medical insurance: 1 box of (mostly unused) Band-Aids/decade
Vacation: Virtual reality app, all expenses paid (by you)
Expense account: $0.00000000000/yr.
Knowing you're helping to educate others: $$$ PRICELESS $$$

I earn exactly nothing from this website, so it only costs me money to run. I'd be interested in monetizing it and sharing the earnings, but i am NOT interested in anyone who wants to use it as an advertising platform for their own site (i've gotten several of these stupid offers and i ignore all of them). If you have some creative ideas as to how to monetize in an ETHICAL way that benefits readers, i'm open to discussing that; however, i am primarily interested in improving existing content and adding to what is already here.

If you're interested, contact me.