How to adjust the margins of odd and even pages separately in a PDF document:
Step one: Split your PDF into separate files for odd pages and even pages
$ pdftk demo.pdf cat odd output odds.pdf
$ pdftk demo.pdf cat even output evens.pdf
Step two: Set margins ("left top right bottom", in points, i.e. 72nds of an inch)
$ pdfcrop --margins "144 36 36 36" odds.pdf modds.pdf
$ pdfcrop --margins "36 36 144 36" evens.pdf mevens.pdf
Step three: Shuffle (interleave) the pages back together
$ pdftk O=modds.pdf E=mevens.pdf shuffle O E output printme.pdf
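The margin values in step two are plain arithmetic: inches times 72. A tiny helper (my own, not part of the recipe) makes the intent of "144 36 36 36" explicit:

```python
def margins(left_in, top_in, right_in, bottom_in):
    """Convert inch measurements to the 1/72-inch points pdfcrop expects,
    in pdfcrop's "left top right bottom" order."""
    return " ".join(str(round(x * 72)) for x in (left_in, top_in, right_in, bottom_in))

# A 2-inch binding margin on the left of odd pages, half an inch elsewhere:
print(margins(2, 0.5, 0.5, 0.5))  # → 144 36 36 36
```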
Here is how to get the page number of a label elsewhere in your document in typst:
#let get_page_number = q => locate(query_token => counter(page).at(query(q, query_token).at(0).location()).at(0))
Example usage:
== Forest<forest>
This is a forest, with trees in all directions. To the east, there appears to be sunlight.
To go north, go to page #get_page_number(<clearing>).\
To go south, go to page #get_page_number(<south_forest>).\
To go east, go to page #get_page_number(<forest_path>).
#pagebreak(to: "even")
== Clearing<clearing>
You are in a clearing, with a forest surrounding you on all sides. A path leads south.
Complete concrete example: typst source and compiled pdf.
(LaTeX calls this \pageref.)
Underappreciated nixpkgs feature: instead of building in the normal minimal sandbox environment, you can say runInLinuxImage to spin up a different Linux distro in a temporary VM, install some of that distro's packages, and run the build there! nixpkgs even packages 19 releases from four other distros to be used with this feature:
Distro | x86_64 | i386
---|---|---
Debian | 10.13-buster 11.6-bullseye | 10.13-buster 11.6-bullseye
Ubuntu | 14.04-trusty 16.04-xenial 18.04-bionic 20.04-focal 22.04-jammy | 14.04-trusty 16.04-xenial 18.04-bionic 20.04-focal 22.04-jammy
Fedora | 26 27 |
Centos | 6 7 | 6
I was having trouble cross-compiling an embedded ARM executable from an x86_64 machine while working with the Raspberry Pi Pico SDK. Their build instructions just weren't working for me; they "assume you are using Raspberry Pi OS running on a Raspberry Pi 4, or an equivalent Debian-based Linux distribution running on another platform." I thought it was going to be a chore to set up and use a VM for this, but it was this easy to say "Nah, build it in Debian," with the list of Debian packages they said to install:
 let foo = { lib, stdenv, ... }:
   stdenv.mkDerivation {
     pname = "foo";
     version = ...;
     ...
+    diskImage = pkgs.vmTools.diskImageFuns.debian11x86_64 {
+      extraPackages = [
+        "build-essential"
+        "cmake"
+        "gcc-arm-none-eabi"
+        "libnewlib-arm-none-eabi"
+        "libstdc++-arm-none-eabi-newlib"
+      ];
+    };
+    diskImageFormat = "qcow2";
   }
-in pkgs.callPackage foo { }
+in pkgs.vmTools.runInLinuxImage (pkgs.callPackage foo { })
Recent pontifications on Async Rust:
I.e., vibrant engagement, progress, and things are very much not yet done and settled. Rust-the-language is async, or maybe async-compatible, but a big chunk of async functionality comes from external crates and those aren't done cooking yet.
Between these two observations:
for now it seems best to avoid async rust unless it's absolutely necessary to your use case. This is especially important for library authors who would do well to avoid allowing details about specific async runtimes—or use of async at all—to leak into their APIs, lest they later find themselves needing to deprecate or awkwardly support today's transient async idioms.
These exchanges were quite helpful to me; thank you, Kline, Nunley, and Endler! A thing I'm working on at first seemed equally approachable with async and CSP, but from these exchanges I conclude that Rust is currently ready for async experimentation and ready for async in critical use cases, but not yet ready for casual async in libraries, so I'll be sticking with threads and channels for now.
A simple NixOS module that each day pops up today's entry in the book The Daily Stoic:
(Separate script, service, and timer files for non-NixOS folks.)
{ pkgs, ... }:
let
  book = pkgs.fetchurl {
    url = "https://archive.org/download/the-daily-stoic-366-meditations-on-wisdom-perseverance-and-the-art-of-living-pdfdrive.com/The-Daily-Stoic_-366-Meditations-on-Wisdom-Perseverance-and-the-Art-of-Living-PDFDrive.com-.pdf";
    hash = "sha256:0h6iacsg66fnvxwxwsk3mncf6g63miqw1jpvw77cn8f0vrg69z2w";
  };
  daily-stoic = pkgs.writeShellScript "daily-stoic" ''
    set -euo pipefail
    date=${pkgs.coreutils}/bin/date
    day=''${1:-$($date +%d)}
    month=''${2:-$($date +%m)}
    year=2004 # A leap year
    day_of_year=$($date -d "$year-$month-$day" +%j)
    day_of_year=''${day_of_year##0}
    day_of_year=''${day_of_year##0}
    quarter=$(( (month-1) / 4))
    page=$((13 + quarter + month + day_of_year))
    ${pkgs.evince}/bin/evince --page-index="$page" ${book}
  '';
in {
  systemd.user.services.daily-stoic = {
    description = "Daily Stoic reading";
    serviceConfig = {
      ExecStart = daily-stoic;
      TimeoutStartSec = 3600 * 23;
      Type = "oneshot";
    };
    wantedBy = [ "default.target" ];
    startAt = "8:00";
  };
}
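The page arithmetic in the script can be restated in Python (same formula; the constant 13 and the quarter offset are taken directly from the script, and 2004 is its leap-year reference so February 29 gets a day-of-year too):

```python
import datetime

def stoic_page(month: int, day: int) -> int:
    """Page index for a given date, mirroring the shell arithmetic above."""
    day_of_year = datetime.date(2004, month, day).timetuple().tm_yday
    quarter = (month - 1) // 4  # offset grows by one per four-month block, as in the script
    return 13 + quarter + month + day_of_year

print(stoic_page(1, 1))    # → 15
print(stoic_page(12, 31))  # → 393
```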
The Center for AI Safety organized a simple Statement on AI Risk. The statement, in its entirety:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Hooray for short, narrow, to-the-point consensus-building mechanisms like this!
Notably, Eliezer Yudkowsky signed this one (after declining to sign the FLI Pause Letter).
trans=tcp
Sometime between Linux 5.15 and 6.1 (probably 5.17?), mounting a network address as a 9p filesystem came to require the mount option -o trans=tcp (e.g., mount -t 9p -o trans=tcp 127.0.0.1 /mnt). Without it, mount now fails with the error message:
mount: /mnt: special device 127.0.0.1 does not exist.
and dmesg says:
9pnet_virtio: no channels available for device 127.0.0.1
nix-collect-garbage --delete-older-than is too aggressive.
Consider this scenario: a machine has been out of service for a while -- a laptop on a shelf. Upon starting it up again, it does a software update, and then nix-collect-garbage --delete-older-than 90d deletes everything, leaving only what it has just updated to. All your rollback options are gone. If anything's wrong with that latest update, the machine will need tedious manual recovery.
Age isn't the only criterion for whether I'm done with a profile. I also want to keep the last few known-good profiles. As a zero-effort, automatable approximation to 'known-good', I'll settle for keeping the last few profiles that the machine ran on for a while. Say, if the machine typically gets weekly updates, keep profiles that were active for at least 5 days. This requires a mechanism that keeps track of how much 'active' time each profile accumulates.
I made a thing that does this. It records the currently-active profiles periodically and then attaches an ExecStartPre to the normal nix-gc service, where it goes through those logs and more carefully cleans up old profiles before the normal nix-collect-garbage runs.
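The selection logic can be sketched like this (constants and names are hypothetical illustrations; the real tool's recording and cleanup details differ, but the idea is the same):

```python
from collections import Counter

SAMPLE_MINUTES = 10    # hypothetical: how often the active profile is logged
KEEP_ACTIVE_DAYS = 5   # "known-good enough" after this much accumulated use
KEEP_LAST_N = 3        # how many such profiles to protect from collection

def profiles_to_keep(samples):
    """samples: chronological profile names, one per sampling interval.

    Protects the currently running profile plus the last KEEP_LAST_N
    profiles that accumulated at least KEEP_ACTIVE_DAYS of active time.
    """
    counts = Counter(samples)
    threshold = KEEP_ACTIVE_DAYS * 24 * 60 // SAMPLE_MINUTES
    seen = []
    for name in reversed(samples):  # newest first, deduplicated
        if name not in seen:
            seen.append(name)
    current = seen[:1]              # never delete the active profile
    known_good = [n for n in seen if counts[n] >= threshold][:KEEP_LAST_N]
    return current + [n for n in known_good if n not in current]
```

Note that a freshly-updated profile is kept only because it is current, not because it is trusted yet; it has to survive five days of use before it counts as known-good.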
As Greg_Colbourn and Simeon_Cps explain, since FLI's Pause Letter and Yudkowsky's rebuttal, it's suddenly okay to talk about how
So that's good.
Maybe it's even okay to talk about how very hard the problem is and how we're not really engaging with the hard parts of it yet, such that we may have to use social coordination to manage the problem for quite a long time before we find technical solutions?
It still feels weird to talk about the stakes being higher than merely killing all humans.
Hooray for progress.
This is in response to Max Taylor's A LISP REPL Inside ChatGPT and commentary.
It is not surprising that ChatGPT can respond to (factorial 5) with 120. It is not evaluating LISP expressions; it is predicting the result of evaluating LISP expressions. We can tease apart the difference by testing with an incorrect implementation of Y:
This Y differs from a correct implementation by substituting (funcall g f) for what should be (funcall g g) in both places that it appears in the body of Y. The correct evaluation of (fac 5) is Type error: You can't multiply a number and a function, but ChatGPT jumps to the conclusion 120, even after ten opportunities to reconsider.
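The original test was in LISP; here is a Python restatement (my own sketch, not the article's code) of why the substitution breaks the recursion:

```python
def Y(f):
    """A correct applicative-order Y combinator."""
    return (lambda g: g(g))(lambda g: f(lambda *args: g(g)(*args)))

def Y_broken(f):
    """The same shape, but with g(g) replaced by g(f) in both places,
    mirroring the incorrect LISP definition described above."""
    return (lambda g: g(f))(lambda g: f(lambda *args: g(f)(*args)))

fac_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

print(Y(fac_step)(5))  # → 120
try:
    Y_broken(fac_step)(5)
except TypeError as e:
    # Python's equivalent of "you can't multiply a number and a function"
    print("TypeError:", e)
```

With the broken combinator, the recursive slot ends up holding a partially-applied step function instead of the factorial itself, so the multiplication hits a function rather than a number.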
... but ChatGPT did impress me by correctly handling this simpler test, where I defined a function named factorial that was not in fact the factorial function, and ChatGPT took the function body into account rather than just pattern-matching on (factorial 5) → 120:
Rather than choosing a scripting language for mods/plugins/extensions, you can now choose "Well, all of them!" Currently five guest languages are supported in thirteen host languages, but more are coming. It's mostly other people's job to make sure more are coming, but if your preferred guest language is not yet supported and if your use case requires sandboxing or any kind of permissions management (which is what makes this harder than just using the FFI), it might be about the same amount of work to roll your own plugin architecture as it would be to add support for that language to Extism. If you add the language to Extism, you also get all the other guest languages for free, and all the other Extism users can then use the language you added support for.
See appcypher's list of projects to make various languages work in WebAssembly, which is the first and largest step in making a new guest language work in Extism. I'm most looking forward to Python and Lua guest support, both of which already have working WebAssembly prototypes (python demos: cheimes, pyodide).
Current image generation models are notoriously bad at generating hands. What happens when how-to-draw teaching materials are included in the training set and you then ask a model to generate teaching materials for how to draw hands?
Sources: dieki and treyratcliff from r/midjourney.
Stars move! Which other star is closest changes over time:
Very roughly, the closest star system will be:
From | To | Closest system
---|---|---
now | 27,000 CE | Proxima Centauri
27,000 CE | 35,000 CE | Alpha Centauri AB
35,000 CE | 44,000 CE | Ross 248
44,000 CE | 45,000 CE | Alpha Centauri AB
45,000 CE | 52,000 CE | Gliese 445
52,000 CE | 80,000 CE | Alpha Centauri AB
80,000 CE | and for a while | Ross 128
Which network tool is in which package?
Tool | busybox | iproute_mptcp | iproute | cni-plugins | inetutils | nettools | nettools_mptcp | cope | toybox | iputils | unixtools.nettools | nmap | nmap-graphical | jwhois | netcat-gnu | arping | bandwidth | hostname | libuuid | logger | nedit | netkittftp | tcpdump | tftp-hpa | traceroute | unixtools.arp | unixtools.ifconfig | unixtools.netstat | unixtools.ping | unixtools.route | util-linux | wget | whois
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ifconfig | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||
hostname | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||
netstat | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||
arp | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||
logger | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||
ping | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||
route | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||
dnsdomainname | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||
ip | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||||
nc | ✓ | ✓ | ✓ | × | |||||||||||||||||||||||||||||
tftp | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||||
traceroute | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||||
whois | ✓ | ✓ | ✓ | ✓ | |||||||||||||||||||||||||||||
arping | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
bridge | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
nameif | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
nmap | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
ping6 | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
slattach | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
tc | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
wget | ✓ | ✓ | ✓ | ||||||||||||||||||||||||||||||
arpd | ✓ | ✓ | |||||||||||||||||||||||||||||||
bandwidth | ✓ | ✓ | |||||||||||||||||||||||||||||||
ctstat | ✓ | ✓ | |||||||||||||||||||||||||||||||
devlink | ✓ | ✓ | |||||||||||||||||||||||||||||||
domainname | ✓ | ✓ | |||||||||||||||||||||||||||||||
genl | ✓ | ✓ | |||||||||||||||||||||||||||||||
ifstat | ✓ | ✓ | |||||||||||||||||||||||||||||||
lnstat | ✓ | ✓ | |||||||||||||||||||||||||||||||
mii-tool | ✓ | ✓ | |||||||||||||||||||||||||||||||
ncat | ✓ | ✓ | |||||||||||||||||||||||||||||||
netcat | ✓ | ✓ | |||||||||||||||||||||||||||||||
nisdomainname | ✓ | ✓ | |||||||||||||||||||||||||||||||
nping | ✓ | ✓ | |||||||||||||||||||||||||||||||
nstat | ✓ | ✓ | |||||||||||||||||||||||||||||||
plipconfig | ✓ | ✓ | |||||||||||||||||||||||||||||||
rarp | ✓ | ✓ | |||||||||||||||||||||||||||||||
rdma | ✓ | ✓ | |||||||||||||||||||||||||||||||
routel | ✓ | ✓ | |||||||||||||||||||||||||||||||
rtacct | ✓ | ✓ | |||||||||||||||||||||||||||||||
rtmon | ✓ | ✓ | |||||||||||||||||||||||||||||||
rtstat | ✓ | ✓ | |||||||||||||||||||||||||||||||
ss | ✓ | ✓ | |||||||||||||||||||||||||||||||
tcpdump | ✓ | ✓ | |||||||||||||||||||||||||||||||
telnet | ✓ | ✓ | |||||||||||||||||||||||||||||||
tipc | ✓ | ✓ | |||||||||||||||||||||||||||||||
tracepath | ✓ | ✓ | |||||||||||||||||||||||||||||||
ypdomainname | ✓ | ✓ | |||||||||||||||||||||||||||||||
clockdiff | ✓ | ||||||||||||||||||||||||||||||||
dcb | ✓ | ||||||||||||||||||||||||||||||||
dhcp | ✓ | ||||||||||||||||||||||||||||||||
firewall | ✓ | ||||||||||||||||||||||||||||||||
ftp | ✓ | ||||||||||||||||||||||||||||||||
host-device | ✓ | ||||||||||||||||||||||||||||||||
host-local | ✓ | ||||||||||||||||||||||||||||||||
ifcfg | ✓ | ||||||||||||||||||||||||||||||||
ipvlan | ✓ | ||||||||||||||||||||||||||||||||
jwhois | ✓ | ||||||||||||||||||||||||||||||||
loopback | ✓ | ||||||||||||||||||||||||||||||||
macvlan | ✓ | ||||||||||||||||||||||||||||||||
ninfod | ✓ | ||||||||||||||||||||||||||||||||
portmap | ✓ | ||||||||||||||||||||||||||||||||
ptp | ✓ | ||||||||||||||||||||||||||||||||
rarpd | ✓ | ||||||||||||||||||||||||||||||||
rcp | ✓ | ||||||||||||||||||||||||||||||||
rdisc | ✓ | ||||||||||||||||||||||||||||||||
rexec | ✓ | ||||||||||||||||||||||||||||||||
rlogin | ✓ | ||||||||||||||||||||||||||||||||
routef | ✓ | ||||||||||||||||||||||||||||||||
rsh | ✓ | ||||||||||||||||||||||||||||||||
rtpr | ✓ | ||||||||||||||||||||||||||||||||
sbr | ✓ | ||||||||||||||||||||||||||||||||
socklist | ✓ | ||||||||||||||||||||||||||||||||
static | ✓ | ||||||||||||||||||||||||||||||||
talk | ✓ | ||||||||||||||||||||||||||||||||
tuning | ✓ | ||||||||||||||||||||||||||||||||
vdpa | ✓ | ||||||||||||||||||||||||||||||||
vlan | ✓ | ||||||||||||||||||||||||||||||||
vrf | ✓ |
(This table made in anger after a build failure in telnetd, which obviously nobody wants, blocked installation of ping, which obviously everybody wants, because they both come from the inetutils package. ಠ_ಠ)
It's often useful to get a notification when a webpage changes. Here's a simple shell script for that:
#!/usr/bin/env bash
set -euo pipefail

url=$1
query=${2:-}
state="${XDG_DATA_HOME:-$HOME/.local/share}/web-watch/${url//\//_}-${query//\//_}"
mkdir -p "$(dirname "$state")"

tmp=$(mktemp)
curl --silent --fail --location --output "$tmp" "$url"

if [[ "$query" ]]; then
  processed=$(mktemp)
  xmllint --html --xpath "$query" "$tmp" > "$processed" 2>/dev/null
  mv "$processed" "$tmp"
fi

[[ -e "$state" ]] && diff -u "$state" "$tmp" || :
mv "$tmp" "$state"
I.e., each time it runs, it shows a diff between the content the last time it ran and the current content. Optionally, it can watch only part of the page, specified by an XPath query.
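The core mechanism is just "compare against the last saved copy." A Python restatement of that state-and-diff step (my own sketch; the network fetch and XPath filtering are omitted, and the names are hypothetical):

```python
import difflib
import tempfile
from pathlib import Path

def check(state_file: Path, current: str) -> str:
    """Diff `current` against the previously saved content, then save it."""
    previous = state_file.read_text() if state_file.exists() else ""
    diff = "\n".join(difflib.unified_diff(
        previous.splitlines(), current.splitlines(),
        fromfile="before", tofile="after", lineterm=""))
    state_file.write_text(current)
    return diff

# First run just stores the content; later runs report what changed.
state = Path(tempfile.mkdtemp()) / "example.state"
check(state, "Chapter 1\n")
print(check(state, "Chapter 1\nChapter 2\n"))  # the diff contains "+Chapter 2"
```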
It works very nicely with cron. Some examples:
50 3 * * 2 web-watch https://spectrum-os.org/ 40 5 * * * web-watch https://revolutionrobotics.org/collections/all '//*[contains(@class,"grid-product__tag") or contains(@class, "collection-filter__item--count")]/text()' # # New chapter available? 33 7 * * 3 web-watch https://aphyr.com/tags/interviews '//article//h1//text()' 24 4 * * * web-watch https://palewebserial.wordpress.com/table-of-contents/ '//article//a//text()' 43 2 * * * web-watch https://www.royalroad.com/fiction/45534/this-used-to-be-about-dungeons '//table[@id="chapters"]//td[1]/a/text()' 37 6 * * * web-watch https://www.projectlawful.com/ '//*[contains(@class, "post-subject") or contains(@class, "post-replies")]/a[1]/text()' 38 6 * * * web-watch https://www.projectlawful.com/board_sections/721 '//*[contains(@class, "post-subject") or contains(@class, "post-replies")]/a[1]/text()' 47 7 * * * web-watch https://mangaclash.com/manga/tomo-chan-wa-onnanoko/ '//li[contains(@class, "wp-manga-chapter")]/a/text()' # New versions available? 44 5 * * 0 web-watch https://nixos.org/download.html '//*[@id = "download-nixos"]//a/text()' 36 4 * * * web-watch https://www.qemu.org/download/ '//article[@id="source"]//a[not(contains(@href,"wiki.qemu.org"))][contains(text(),".")]/text()' 23 2 * * * web-watch https://www.gnucash.org/ '//h2[@id="dwnld-box"]/text()' 41 0 * * * web-watch https://liballeg.org/feed_atom.xml '//*[local-name()="title" and contains(., "released")]/text()' 37 2 * * * web-watch https://pypi.org/rss/project/backoff/releases.xml '//*[local-name()="title"]/text()' 44 6 * * * web-watch https://ftp.gnu.org/gnu/gawk/ '//a/text()' 27 7 * * * web-watch https://ftp.gnu.org/gnu/gzip/ '//a/text()' 46 0 * * * web-watch https://www.minetest.net/downloads/ '//a[contains(text(), " portable")]/text()' 18 3 * * * web-watch 'https://git.librecmc.org/?p=librecmc/librecmc.git;a=tags' '//tr[position()<3]//a[@class="list name"]/text()'
To cut straight across a text block, a book-trimming knife must be ground on only one side ("single beveled") and be quite rigid, or it will meander over the course of the cut: like a chisel, except that it is used for slicing / draw cuts rather than push cuts. A very sharply angled skew chisel can work, but these are harder to find / more expensive than mass-produced flat chisels. A rounded chisel works even better, and it is easy to make one from a cheap flat chisel:
Power tools recommended! But you don't need a specialty tool: I used a plain ol' drill and a $5 (Shark, Miller, Forney) grinding wheel I found at the local hardware store awhile back:
See also Jesse Aston's, Glenn Malkin's, and Darryn A. Schneider's related videos.
Update: Sometimes you can find "skiving knives" with this blade shape, but they're often much shorter, which means you'd need to use a much narrower guide and/or be unable to trim thicker books.
OpenPGP signatures should be checked when projects provide them. To do this in a publicly-verifiable way, the signature check can be done as part of the build process. An example:
{ fetchurl, gnupg, runCommand, }:
let version = "5.4";
in rec {
  tails-signing-key = fetchurl {
    url = "https://tails.boum.org/tails-signing.key";
    sha256 = "1sa6kc1icwf8y1smfqfy3zxh9z687zrm59whn2xj4s98wqg39wbh";
  };
  unverified-tails-iso = fetchurl {
    url = "https://ftp.nluug.nl/os/Linux/distr/tails/tails/stable/tails-amd64-${version}/tails-amd64-${version}.iso";
    sha256 = "142nw4gp24pn1ndx6rk78bbam78pbmwgnzfs0zmb9vv1s4lp15wa";
  };
  tails-iso-signature = fetchurl {
    url = "https://tails.boum.org/torrents/files/tails-amd64-${version}.iso.sig";
    sha256 = "1f0l6mwy6nw8817a5p5a798arqklbv3fkv3d3p45pzinr57ny6dc";
  };
  verified-tails-iso = runCommand "verified-tails-iso" { } ''
    set -euo pipefail
    GNUPGHOME=$(mktemp -d)
    export GNUPGHOME
    ${gnupg}/bin/gpg --import ${tails-signing-key}
    ${gnupg}/bin/gpg --verify ${tails-iso-signature} ${unverified-tails-iso} \
      && ln -s ${unverified-tails-iso} $out
  '';
}
Version bumps change the fetch hashes of the signed resource and the signature, but not the signing key:
@@ -1,6 +1,6 @@
 { fetchurl, gawk, gnupg, gnused, qemu_kvm, runCommand, socat, stdenvNoCC, wmctrl
 , writeShellScriptBin, }:
-let version = "5.3.1";
+let version = "5.4";
 in rec {
   tails-signing-key = fetchurl {
@@ -11,11 +11,11 @@
   unverified-tails-iso = fetchurl {
     url = "https://ftp.nluug.nl/os/Linux/distr/tails/tails/stable/tails-amd64-${version}/tails-amd64-${version}.iso";
-    sha256 = "12riynxzwv0f6cl5jkp8z1zszxxzfrk2kmf4f9g118ypwjzy352p";
+    sha256 = "142nw4gp24pn1ndx6rk78bbam78pbmwgnzfs0zmb9vv1s4lp15wa";
   };
   tails-iso-signature = fetchurl {
     url = "https://tails.boum.org/torrents/files/tails-amd64-${version}.iso.sig";
-    sha256 = "0s50m12g6lsrwwrvm79wrq7lyvwgha12ajc1qi6sr1dxn48zvxp7";
+    sha256 = "1f0l6mwy6nw8817a5p5a798arqklbv3fkv3d3p45pzinr57ny6dc";
   };
   verified-tails-iso = runCommand "verified-tails-iso" { } ''
In Rainbows End, electronic devices all have several MEMS-aimable low-power lasers for high-throughput dynamic networking. Some folks at Sejong University are working on line-of-sight infrared laser power transmission. This will need dynamic aiming in order to be usefully deployed, but can start out in easy mode of connecting with other devices that don't move or move rarely and then gently climb the challenge curve up to tracking moving hand-held and worn devices. Meanwhile, it will be easy to modulate the power beam for a moderate-bandwidth communication channel, or later add higher frequencies to the beam for ludicrous bandwidth.
ioctl Yourself Directly
And this 2013 Slackware thread is still the canonical source of this information.
Example usage:
#include <err.h>
#include <fcntl.h>
#include <linux/cdrom.h>
#include <stdlib.h>
#include <sysexits.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv) {
  int cdrom;
  int drive_status;
  if (argc != 2) {
    errx(EX_USAGE, "Usage: tray_open device. For example: tray_open /dev/cdrom");
  }
  if ((cdrom = open(argv[1], O_RDONLY | O_NONBLOCK)) == -1) {
    err(EX_NOINPUT, "Unable to open device %s", argv[1]);
  }
  if ((drive_status = ioctl(cdrom, CDROM_DRIVE_STATUS)) == -1) {
    err(EX_IOERR, "Cannot determine tray status of %s", argv[1]);
  }
  if (close(cdrom) == -1) {
    err(EX_IOERR, "Unable to close device %s", argv[1]);
  }
  exit(drive_status == CDS_TRAY_OPEN ? 0 : 1);
}