From beebe at math.utah.edu Sat Jun 1 01:05:01 2019 From: beebe at math.utah.edu (Nelson H. F. Beebe) Date: Fri, 31 May 2019 09:05:01 -0600 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <8c0e17fb-41e3-753c-9678-04e410825dce@kilonet.net> Message-ID: [This is another essay-length post about O/Ses and ZFS...] Arthur Krewat asks on Thu, 30 May 2019 20:42:33 -0400 about my TUHS list posting about running a large computing facility without user disk quotas, and our experiences with ZFS: >> I have yet to play with Linux and ZFS but would appreciate to >> hear your experiences with it. First, ZFS on the Solaris family (including DilOS, Dyson, Hipster, Illumian, Omnios, Omnitribblix, OpenIndiana, Tribblix, Unleashed, and XStreamOS), the FreeBSD family (including ClonOS, FreeNAS, GhostBSD, HardenedBSD, MidnightBSD, PCBSD, Trident, and TrueOS), and on GNU/Linux (1000+ distributions, due to theological differences) offers important data safety features, and ease of management. There are lots of details about ZFS that you can find in the slides of a talk that we have given several times: http://www.math.utah.edu/~beebe/talks/2017/zfs/zfs.pdf The slides at the end of that file contain pointers to ZFS resources, including recent books. Some of the key ZFS features are: * all disks form a dynamic shared pool from which space can be drawn for datasets, on top of which filesystems can be created; * the pool can exploit data redundancy via various RAID Zn choices to survive loss of individual disks, and optionally, provide hot spares shared across the pool, and available to all datasets; * hardware RAID controllers are unneeded, and discouraged --- a JBOD (just a bunch of disks) array is quite satisfactory * all metadata, and all file data blocks, have checksums that are replicated elsewhere in the pool, and checked on EVERY read and write, allowing automatic silent recovery (via data redundancy) from transient or permanent errors in disk blocks --- ZFS is self healing; * ZFS filesystems can have unlimited numbers of snapshots; * snapshots are extremely fast, typically less than one second, even in multi-terabyte filesystems; * snapshots are readonly, and thus, immune to ransomware attacks; * ZFS send and receive operations allow propagation of copies of filesystems by transferring only data blocks that have changed since the last send operation; * the ZFS copy-on-write policy means that in-use blocks are never changed, and that block updates are guaranteed to be atomic; * quotas can optionally be enabled on datasets, and grown as needed (quota shrink is not yet possible, but is in ZFS development plans). * ZFS optionally supports encryption, data compression, block deduplication, and n-way disk replication; * Unlike traditional fsck, which requires disks to be offline during the checks, ZFS scrub operations can be run (usually by cron jobs, and at lower priority) to go through datasets to verify data integrity and filesystem sanity while normal services continue. ZFS likes to cache metadata, and active data blocks, in memory. Most of our VMs that have other filesystems, like EXT{2,3,4}, FFS, JFS, MFS, ReiserFS, UFS, and XFS, run quite happily with 1GB of DRAM. The ZFS, DragonFly BSD Hammer, and BTRFS ones are happier with 2GB to 4GB of DRAM. Our central fileservers have 256GB to 768GB of DRAM. The major drawback of copy-on-write and snapshots is that once a snapshot has been taken, a filesystem-full condition cannot be ameliorated by removing a few large files. Instead, you have to either increase the dataset quota (our normal practice), or you have to free older snapshots. Our view is that the benefits of snapshots for recovery of earlier file versions far outweigh that one drawback: I myself did such a recovery yesterday when I accidentally clobbered a critical file full of digital signature keys. On Solaris and FreeBSD families, snapshots are visible to users as read-only filesystems, like this (for ftp://ftp.math.utah.edu/pub/texlive and http://www.math.utah.edu/pub/texlive): % df /u/ftp/pub/texlive Filesystem 1K-blocks Used Available Use% Mounted on tank:/export/home/2001 518120448 410762240 107358208 80% /home/2001 % ls /home/2001/.zfs/snapshot AMANDA auto-2019-05-21 auto-2019-05-25 auto-2019-05-29 auto-2019-05-18 auto-2019-05-22 auto-2019-05-26 auto-2019-05-30 auto-2019-05-19 auto-2019-05-23 auto-2019-05-27 auto-2019-05-31 auto-2019-05-20 auto-2019-05-24 auto-2019-05-28 % ls /home/2001/.zfs/snapshot/auto-2019-05-21/ftp/pub/texlive Contents Images Source historic protext tlcritical tldump tlnet tlpretest That is, you first use the df command to find the source of the current mount point, then use ls to examine the contents of .zfs/snapshot under that source, and finally follow your pathname downward to locate a file that you want to recover, or compare with a current copy, or another snapshot copy. On Network Appliance systems with the WAFL filesystem design (see https://en.wikipedia.org/wiki/Write_Anywhere_File_Layout ), snapshots are instead mapped to hidden directories inside each directory, which is more convenient for human users, and is a feature that we would really like to see on ZFS. A nuisance for us is that the current ZFS implementation on CentOS 7 (a subset of the pay-for-service Red Hat Enterprise Linux 7) does not show any files under the .zfs/snapshot/auto-YYYY-MM-DD directories, except on the fileserver itself. When we used Solaris ZFS for 15+ years, our users could themselves recover previous file versions following instructions at http://www.math.utah.edu/faq/files/files.html#FAQ-8 Since our move to a GNU/Linux fileserver, they no longer can; instead, they have to contact systems management to access such files. We sincerely hope that CentOS 8 will resolve that serious deficiency: see http://www.math.utah.edu/pub/texlive-utah/README.html#rhel-8 for comments on the production of that O/S release from the recent major new Red Hat EL8 release. We have a large machine-room UPS, and outside diesel generator, so our physical servers are immune to power outages and power surges, the latter being a common problem in Utah during summer lightning storms. Thus, unplanned fileserver outages should never happen. A second issue for us is that on Solaris and FreeBSD, we have never seen a fileserver crash due to ZFS issues, and on Solaris, our servers have sometimes been up for one to three years before we took them down for software updates. However, with ZFS on CentOS 7, we have seen 13 unexplained reboots in the last year. Each has happened late at night, or in the early morning, while backups to our tape robot, and ZFS send/receive operations to a remote datacenter, are in progress. The crash times suggest to us that heavy ZFS activity is exposing a kernel or Linux ZFS bug. We hope that CentOS 8 will resolve that issue. We have ZFS on about 70 physical and virtual machines, and GNU/Linux BTRFS on about 30 systems. With ZFS, freeing a snapshot moves its blocks to the free list within seconds. With BTRFS, freeing snapshots often takes tens of minutes, and sometimes, hours, before space recovery is complete. That can be aggravating when it stops your work on that system. By contrast, snapshots on both BTRFS and ZFS are fast. However, they appear to be far smaller on ZFS than on BTRFS. We have VMs and physical machines with ZFS that have 300 to 1000 daily snapshots with little noticeable reduction in free space, whereas those with BTRFS seem to lose about a gigabyte a day. My home TrueOS system has sufficient space for about 25 years of ZFS dailies. Consequently, I run nightly reports of free space on all of our systems, and manually intervene on the BTRFS ones when space hits a critical level (I try to keep 10GB free). On both ZFS and BTRFS, packages are available to trim old snapshots, and we run the ZFS trimmer via cron jobs on our main fileservers. In the GNU/Linux world, however, only openSUSE comes by default with a cron-enabled BTRFS snapshot trimmer, so intervention is unnecessary on that O/S flavor. I have never installed snapshot trimmer packages on any of our other VMs, because it just means more management work to deal with variants in trimmer packages, configuration files, and cron jobs. Teams of ZFS developers from FreeBSD and GNU/Linux are working on merging divergent features back into a common OpenZFS code base that all O/Ses that support ZFS can use; that merger is expected to happen within the next few months. ZFS has been ported by third parties to Apple macOS and Microsoft Windows, so it has the potential of becoming a universal filesystem available on all common desktop environments. Then we could use ZFS send/receive instead of .iso, .dmg, and .img files to copy entire filesystems between different O/Ses. ------------------------------------------------------------------------------- - Nelson H. F. Beebe Tel: +1 801 581 5254 - - University of Utah FAX: +1 801 581 4148 - - Department of Mathematics, 110 LCB Internet e-mail: beebe at math.utah.edu - - 155 S 1400 E RM 233 beebe at acm.org beebe at computer.org - - Salt Lake City, UT 84112-0090, USA URL: http://www.math.utah.edu/~beebe/ - ------------------------------------------------------------------------------- From rp at servium.ch Sat Jun 1 01:55:25 2019 From: rp at servium.ch (Rico Pajarola) Date: Fri, 31 May 2019 17:55:25 +0200 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <8c0e17fb-41e3-753c-9678-04e410825dce@kilonet.net> References: <8c0e17fb-41e3-753c-9678-04e410825dce@kilonet.net> Message-ID: On Fri, May 31, 2019 at 2:50 AM Arthur Krewat wrote: > On 5/30/2019 8:21 PM, Nelson H. F. Beebe wrote: > > Several list members report having used, or suffered under, filesystem > > quotas. > > > > At the University Utah, in the College of Science, and later, the > > Department of Mathematics, we have always had an opposing view: > > > > Disk quotas are magic meaningless numbers imposed by some bozo > > ignorant system administrator in order to prevent users from > > getting their work done. > > You've never had people like me on your systems ;) - But yeah... > > > For the last 15+ years, our central fileservers have run ZFS on > > Solaris 10 (SPARC, then on Intel x86_64), and for the last 17 months, > > on GNU/Linux CentOS 7. > > > I do the same with ZFS - limit the individual filesystems with "zfs set > quota=xxx" so the entire pool can't be filled. I assign a zfs filesystem > to an individual user in /export/home and when they need more, they let > me know. Various monitoring scripts tell me when a filesystem is > approaching 80%, and I either just expand it on my own because of the > user's usage, or let them know they are approaching the limit. > > Same thing with Netbackup Basic Disk pools in a common ZFS pool. I can > adjust them as needed, and Netbackup sees the change almost immediately. > > At home, I did this with my kids ;) - Samba and zfs quota on the > filesystem let them know how much room they had. > > art k. > > PS: I'm starting to move to FreeBSD and ZFS for VMware datastores, the > performance is outstanding over iSCSI on 10Gbe - (which Solaris 11's > COMSTAR is not apparently very good at especially with small block > sizes). I have yet to play with Linux and ZFS but would appreciate to > hear (privately, if it's not appropriate for the list) your experiences > with it. > At home I use ZFS (on Linux) exclusively for all data I care about (and also for data I don't care about). I have a bunch of pools ranging from 5TB to 45TB with RAIDZ2 (overall about 50 drives), in various hardware setups (SATA, SAS, some even via iSCSI). Performance is not what I'm used to on Solaris, but in this case, convenience wins over speed. I never lost any data, even though with that amount of disks, there's always a broken disk somewhere. The on-disk format is compatible with FreeBSD and Solaris (I have successfully moved disks between OSes), so you're not "locked in". -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at kjorling.se Sat Jun 1 02:06:22 2019 From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Fri, 31 May 2019 16:06:22 +0000 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: References: <8c0e17fb-41e3-753c-9678-04e410825dce@kilonet.net> Message-ID: <20190531160622.n3uwzr7hb2b2bpyn@h-174-65.A328.priv.bahnhof.se> On 31 May 2019 09:05 -0600, from beebe at math.utah.edu (Nelson H. F. Beebe): > Some of the key ZFS features are: > /snip/ > * snapshots are readonly, and thus, immune to ransomware > attacks; Let's hope said ransomware isn't smart enough to run "zfs list X -t snapshot" and "zfs destroy X at Y". And while "zfs list" is Mostly Harmless, let's hope the sysadmin is smart enough to not let arbitrary users run "zfs destroy" anything important. -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “The most dangerous thought that you can have as a creative person is to think you know what you’re doing.” (Bret Victor) From gtaylor at tnetconsulting.net Sat Jun 1 02:15:48 2019 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Fri, 31 May 2019 10:15:48 -0600 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <20190531160622.n3uwzr7hb2b2bpyn@h-174-65.A328.priv.bahnhof.se> References: <8c0e17fb-41e3-753c-9678-04e410825dce@kilonet.net> <20190531160622.n3uwzr7hb2b2bpyn@h-174-65.A328.priv.bahnhof.se> Message-ID: <8dde1b5d-f37e-64f7-a43f-a91c539727b0@spamtrap.tnetconsulting.net> On 5/31/19 10:06 AM, Michael Kjörling wrote: > Let's hope said ransomware isn't smart enough to run "zfs list X -t > snapshot" and "zfs destroy X at Y". (Baring any local privilege escalation....) I think that ZFS would protect (snapshots) against ransomware running as an unprivileged user that can't run zfs / zpool commands. > And while "zfs list" is Mostly Harmless, let's hope the sysadmin is smart > enough to not let arbitrary users run "zfs destroy" anything important. I have found the zfs and zpool command sufficiently easy to allow limited access via appropriate sudoers entries. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4008 bytes Desc: S/MIME Cryptographic Signature URL: From michael at kjorling.se Sat Jun 1 02:38:52 2019 From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Fri, 31 May 2019 16:38:52 +0000 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <8dde1b5d-f37e-64f7-a43f-a91c539727b0@spamtrap.tnetconsulting.net> References: <8c0e17fb-41e3-753c-9678-04e410825dce@kilonet.net> <20190531160622.n3uwzr7hb2b2bpyn@h-174-65.A328.priv.bahnhof.se> <8dde1b5d-f37e-64f7-a43f-a91c539727b0@spamtrap.tnetconsulting.net> Message-ID: <20190531163849.mrxjkefr6d7b7gyt@h-174-65.A328.priv.bahnhof.se> On 31 May 2019 10:15 -0600, from tuhs at minnie.tuhs.org (Grant Taylor via TUHS): >>> * snapshots are readonly, and thus, immune to ransomware >>> attacks; >> >> Let's hope said ransomware isn't smart enough to run "zfs list X -t >> snapshot" and "zfs destroy X at Y". > > (Baring any local privilege escalation....) I think that ZFS would protect > (snapshots) against ransomware running as an unprivileged user that can't > run zfs / zpool commands. Yes, and that's the point I was (trying to) make: snapshots are only immune to ransomware as long as (a) said ransomware isn't running as root, and (b) said ransomware can't escalate to having root access (or whatever capabilities might be required to poke around ZFS snapshots), and of course (c) said ransomware doesn't know about ZFS snapshots. Snapshots definitely raise the bar, which is a good thing, not to mention how useful they are for bona fide "oh carp" moments. I do however feel that "immune" is a bit too strong a word. >> And while "zfs list" is Mostly Harmless, let's hope the sysadmin is >> smart enough to not let arbitrary users run "zfs destroy" anything >> important. > > I have found the zfs and zpool command sufficiently easy to allow limited > access via appropriate sudoers entries. I'm pretty sure at least ZoL for Debian comes packages with a sudoers file where all you need to do to allow read-only ZFS sudo access to normal users is uncomment one or a few lines. It's been a while since I set it up. -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “The most dangerous thought that you can have as a creative person is to think you know what you’re doing.” (Bret Victor) From pete at nomadlogic.org Sat Jun 1 05:07:22 2019 From: pete at nomadlogic.org (Pete Wright) Date: Fri, 31 May 2019 12:07:22 -0700 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> References: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> Message-ID: <78485efd-2cd3-a290-8142-48672bb847c5@nomadlogic.org> On 5/30/19 6:49 AM, David wrote: > I think it was BSD 4.1 that added quotas to the disk system, and I was just wondering if anyone ever used them, in academia or industry. As a user and an admin I never used this and, while I thought it was interesting, just figured that the users would sort it out amongst themselves. Which they mostly did. > > So, anyone ever use this feature? Lots of interesting insights/stories on this thread so figured i'd throw my hat in the ring and share a business anecdote... For quite a while i worked in the special effects/animation industry where fortunately (for me) unix has a long and interesting history. One secret abut the VFX world is that it's tremendously expensive with relatively little financial upside.  in my experience it is the studios who get most of the residual income from a blockbuster feature.  also a crew for a AAA feature requires lots of human power, computers and storage.  my shop frequently had 3-5 features in full production mode at the same time so we were redlining all of our systems 24/7. So aside from the cost of maintaining a large renderfarm, unix/linux 3d workstations, editing bays etc we also had an enormous NetApp footprint to support all these systems.  Now artists love creating lots and lots of high resolution images, and if they had their way there were be unlimited storage so they'd never have to archive a shot to tape in the event they need to reference it later.  But obviously that's not reasonable from a financial perspective. Our solution was to make heavy use of storage quotas in our environment, and then leverage quotas to provide per-department billing.  An individual user was given something like 1GB by default (here they had their mailbox, source-code, scripts etc), then the show they were booked on was then given an allocation of say 1TB of storage.  The show would then carve out this allocation in a per-shot basis.  This allowed us as an organization to actually keep pretty detailed records on our costs and unfortunately isn't something I've seen replicated well at lots of startups flush with cash these days. I was briefly on the team responsible for managing these quotas for a show and it was seriously an around the clock operation to keep our disks from filling up.  One of the tricks was to figure out how much space a rendered sequence of images would consume, factor in the time-to-render a frame and attempt to line up your backup jobs to free up enough space so the render nodes could write out the images to NFS. -pete -- Pete Wright pete at nomadlogic.org @nomadlogicLA From gtaylor at tnetconsulting.net Sat Jun 1 06:43:01 2019 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Fri, 31 May 2019 14:43:01 -0600 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <78485efd-2cd3-a290-8142-48672bb847c5@nomadlogic.org> References: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> <78485efd-2cd3-a290-8142-48672bb847c5@nomadlogic.org> Message-ID: <8d220e00-c908-5e3c-48d1-927790385ca0@spamtrap.tnetconsulting.net> On 5/31/19 1:07 PM, Pete Wright wrote: > An individual user was given something like 1GB by default (here they > had their mailbox, source-code, scripts etc), then the show they were > booked on was then given an allocation of say 1TB of storage. It sounds like you are talking about group quotas in addition to each individual user's /user/ quota. Is that correct? Or was this more an imposed file system limit in lieu of group quotas? > I was briefly on the team responsible for managing these quotas for a > show and it was seriously an around the clock operation to keep our disks > from filling up. One of the tricks was to figure out how much space a > rendered sequence of images would consume, factor in the time-to-render > a frame and attempt to line up your backup jobs to free up enough space > so the render nodes could write out the images to NFS. Intriguing. Thank you for sharing. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4008 bytes Desc: S/MIME Cryptographic Signature URL: From pete at nomadlogic.org Sat Jun 1 06:59:58 2019 From: pete at nomadlogic.org (Pete Wright) Date: Fri, 31 May 2019 13:59:58 -0700 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <8d220e00-c908-5e3c-48d1-927790385ca0@spamtrap.tnetconsulting.net> References: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> <78485efd-2cd3-a290-8142-48672bb847c5@nomadlogic.org> <8d220e00-c908-5e3c-48d1-927790385ca0@spamtrap.tnetconsulting.net> Message-ID: <37bb9b7a-518d-9d7f-5c63-ee12346885d2@nomadlogic.org> On 5/31/19 1:43 PM, Grant Taylor via TUHS wrote: > On 5/31/19 1:07 PM, Pete Wright wrote: >> An individual user was given something like 1GB by default (here they >> had their mailbox, source-code, scripts etc), then the show they were >> booked on was then given an allocation of say 1TB of storage. > > It sounds like you are talking about group quotas in addition to each > individual user's /user/ quota.  Is that correct?  Or was this more an > imposed file system limit in lieu of group quotas? it was a mix, $HOME was tied to a specific UID. for show data we leveraged per-volume quotas.  our directory structure was setup in such a way that a a path would be a series of symlinks pointing for specific NFS volumes.  so /shot/sequence/render for example could reference multiple volumes. /shot/sequence would live on a separate filer than "render".  we could also move around where "render" would live and not have to worry about creating orphan paths and such.  this was a common practice to manage hotspots, or for maint windows etc. the net result was we used a mix of volume quotas that netapp managed in addition to higher level shot/show quotas which were calculated out of band by some in-house services. -pete -- Pete Wright pete at nomadlogic.org @nomadlogicLA From reed at reedmedia.net Sat Jun 1 10:30:13 2019 From: reed at reedmedia.net (reed at reedmedia.net) Date: Fri, 31 May 2019 19:30:13 -0500 (CDT) Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> References: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> Message-ID: (Sharing some from my book in regards to at Berkeley slowly being written ... some questions below too.) % prior to March 19, 1976 \cite{unix-news-19760319} In addition to teaching and writing Unix Pascal, Thompson coded or advised on various other system modifications. He put in disk space quotas to prevent runaways.\cite{kenthompson1} % (inode.flags & 060000) == 020000 A special file named ``.q'' would track (in its inode) the maximum number of blocks that may be used by files in the directory and its descendents and a count of the number of used blocks. A new ``quot'' system call was added to make directories with quotas. In addition, a modified link(2) allowed quotas to be exceeded temporarily so a move/rename operation could work on a near full quota.\cite{unix-news-19760319} % NOTE: cptree source for system call use example % I cannot find this quot program A new quot command was used to define the quotas. % CITE: Later, Kurt Shoens, a student in Thompson's operating systems course\cite{arnold1}, wrote a tool called pq that would search up from your current working directory to find the nearest quota file and display what is used, the defined maximum quota, and its percentage used. % CITE: pq manpage and source Also a custom ls command identified if an entry was a quota file, file command could identify quota files, ex could warn if the ``Quota exceeded'', and cptree and lntree could copy quota files, This quotas implementation was not integrated back into Unix. So back to ``threatening messages of the day and personal letters.'' Also a different quot tool was added a couple years later in the Seventh Edition of Unix (and also shipped with later BSDs) to display (but not restrict) the disk usage for each user.\cite{ritchie-7th-edition-setting-up} (A new quotas implementation was written and introduced to Berkeley years later. This story is in \autoref{chapter:4BSD}.) --------------- At first I thought that a side-effect of quotas was that users couldn't chown files to others, but wrong since already is documented that chown is for super-user only in V6. Any thoughts on that? What is the unused pw_quota in v7 getpwent? Is that related at all to disk quotas. From dot at dotat.at Tue Jun 4 23:50:03 2019 From: dot at dotat.at (Tony Finch) Date: Tue, 4 Jun 2019 14:50:03 +0100 Subject: [TUHS] Quotas - did anyone ever use them? In-Reply-To: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> References: <975B93B6-AD7C-41B5-A14D-2DE4FEFAD3A6@kdbarto.org> Message-ID: A couple of Cambridge quota-related anecdotes: In its early days the Debian bug tracker was run out of Ian Jackson's personal account on our central Unix service. He had to delete bugs when they were closed to keep within his disk quota. Our mail software Exim has a two-level quota system: it handles quota errors from the OS as a hard limit, with some annoying portability hacks https://github.com/Exim/exim/blob/master/src/src/transports/appendfile.c#L1208 Exim it also has its own per-mailbox quota implementation. This can help control memory usage (mainly by Pine, back in the day...) as well as providing soft limit warnings and other bells and whistles. Tony. -- f.anthony.n.finch http://dotat.at/ Southeast Viking, North Utsire, South Utsire, Northeast Forties: Southerly 4 or 5, becoming cyclonic 5 to 7, perhaps gale 8 later. Slight or moderate, becoming moderate or rough later. Fog patches, thundery rain later. Moderate or good, occasionally very poor. From edouardklein at gmail.com Wed Jun 5 22:38:58 2019 From: edouardklein at gmail.com (Edouard Klein) Date: Wed, 05 Jun 2019 14:38:58 +0200 Subject: [TUHS] Scratch files in csh Message-ID: <87blzcmckd.fsf@plume.lan> Hi all, I saw this on https://old.reddit.com/r/unix : http://blog.snailtext.com/posts/no-itch-to-scratch.html It's about (the lack of) scratch files in csh. Maybe somebody here know what happened to the feature ? Cheers, Edouard. From arnold at skeeve.com Wed Jun 5 22:50:28 2019 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 05 Jun 2019 06:50:28 -0600 Subject: [TUHS] Scratch files in csh In-Reply-To: <87blzcmckd.fsf@plume.lan> References: <87blzcmckd.fsf@plume.lan> Message-ID: <201906051250.x55CoSxK005467@freefriends.org> Edouard Klein wrote: > Hi all, > > I saw this on https://old.reddit.com/r/unix : > > http://blog.snailtext.com/posts/no-itch-to-scratch.html > > It's about (the lack of) scratch files in csh. Maybe somebody here know > what happened to the feature ? > > Cheers, > > Edouard. >From the phraseology in the paper ("The system will remove ....") it sounds to me like it was not a csh feature at all, but rather that the UCB systems had a cron job to run something like find / -name '#*' -mtime +7 -exec rm {} \; It's easy enough to research this in the archives, if you have the energy. :-) HTH, Arnold From clemc at ccc.com Wed Jun 5 23:31:14 2019 From: clemc at ccc.com (Clem Cole) Date: Wed, 5 Jun 2019 09:31:14 -0400 Subject: [TUHS] Scratch files in csh In-Reply-To: <201906051250.x55CoSxK005467@freefriends.org> References: <87blzcmckd.fsf@plume.lan> <201906051250.x55CoSxK005467@freefriends.org> Message-ID: Indeed - that's how UCB Systems worked. /tmp was a small scratch disk and anything there was suspect. Scratch files were not a CShell feature, they were a UNIX feature, very much needed on the 16-bit address PDP-11 where it was developed. The idea originally became popular with Dennis's C Compiler which used it for the intermediate files between the passes on the PDP-11. On a large public system like a University, /tmp would fill with cruft. It was traditionally removed on reboot. But that was not good enough for production systems that did not reboot. My memory is that there was a script that was similar to what Aharon suggested that ran in the early hours of the day, although before it ran it created a time_stamp_file with touch(1) set to be 6 hours previous so the script let anything under 6 hours survive using a negation on the -newer time_stamp_file clause. Clem On Wed, Jun 5, 2019 at 8:51 AM wrote: > Edouard Klein wrote: > > > Hi all, > > > > I saw this on https://old.reddit.com/r/unix : > > > > http://blog.snailtext.com/posts/no-itch-to-scratch.html > > > > It's about (the lack of) scratch files in csh. Maybe somebody here know > > what happened to the feature ? > > > > Cheers, > > > > Edouard. > > From the phraseology in the paper ("The system will remove ....") it sounds > to me like it was not a csh feature at all, but rather that the UCB > systems had a cron job to run something like > > find / -name '#*' -mtime +7 -exec rm {} \; > > It's easy enough to research this in the archives, if you have the energy. > :-) > > HTH, > > Arnold > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jun 5 23:34:27 2019 From: clemc at ccc.com (Clem Cole) Date: Wed, 5 Jun 2019 09:34:27 -0400 Subject: [TUHS] Scratch files in csh In-Reply-To: References: <87blzcmckd.fsf@plume.lan> <201906051250.x55CoSxK005467@freefriends.org> Message-ID: I should add, my memory is that the script was done that way before -mtime switch added; but its a tad fuzz -- many, many beers ago. ᐧ On Wed, Jun 5, 2019 at 9:31 AM Clem Cole wrote: > Indeed - that's how UCB Systems worked. /tmp was a small scratch disk and > anything there was suspect. Scratch files were not a CShell feature, they > were a UNIX feature, very much needed on the 16-bit address PDP-11 where it > was developed. > > The idea originally became popular with Dennis's C Compiler which used > it for the intermediate files between the passes on the PDP-11. On a > large public system like a University, /tmp would fill with cruft. It was > traditionally removed on reboot. But that was not good enough for > production systems that did not reboot. > > My memory is that there was a script that was similar to what Aharon > suggested that ran in the early hours of the day, although before it ran it > created a time_stamp_file with touch(1) set to be 6 hours previous so the > script let anything under 6 hours survive using a negation on the -newer > time_stamp_file clause. > > Clem > > On Wed, Jun 5, 2019 at 8:51 AM wrote: > >> Edouard Klein wrote: >> >> > Hi all, >> > >> > I saw this on https://old.reddit.com/r/unix : >> > >> > http://blog.snailtext.com/posts/no-itch-to-scratch.html >> > >> > It's about (the lack of) scratch files in csh. Maybe somebody here know >> > what happened to the feature ? >> > >> > Cheers, >> > >> > Edouard. >> >> From the phraseology in the paper ("The system will remove ....") it >> sounds >> to me like it was not a csh feature at all, but rather that the UCB >> systems had a cron job to run something like >> >> find / -name '#*' -mtime +7 -exec rm {} \; >> >> It's easy enough to research this in the archives, if you have the energy. >> :-) >> >> HTH, >> >> Arnold >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.paulsen at firemail.de Wed Jun 5 23:05:32 2019 From: thomas.paulsen at firemail.de (Thomas Paulsen) Date: Wed, 05 Jun 2019 15:05:32 +0200 Subject: [TUHS] Scratch files in csh In-Reply-To: <201906051250.x55CoSxK005467@freefriends.org> References: <87blzcmckd.fsf@plume.lan> <201906051250.x55CoSxK005467@freefriends.org> Message-ID: --- Ursprüngliche Nachricht --- Von: arnold at skeeve.com Datum: 05.06.2019 14:50:28 An: tuhs at tuhs.org, edouardklein at gmail.com Betreff: Re: [TUHS] Scratch files in csh > Edouard Klein wrote: > > > Hi all, > > > > I saw this on https://old.reddit.com/r/unix : > > > > http://blog.snailtext.com/posts/no-itch-to-scratch.html > > > > It's about (the lack of) scratch files in csh. Maybe somebody here know > > > what happened to the feature ? > > > > Cheers, > > > > Edouard. > > From the phraseology in the paper ("The system will remove ....") > it sounds > to me like it was not a csh feature at all, but rather that the UCB > systems had a cron job to run something like > > find / -name '#*' -mtime +7 -exec rm {} \; > > It's easy enough to research this in the archives, if you have the energy. > > :-) > > HTH, > > Arnold > From aksr at t-com.me Thu Jun 6 02:02:16 2019 From: aksr at t-com.me (aksr) Date: Wed, 5 Jun 2019 18:02:16 +0200 Subject: [TUHS] PAC (Perceptual audio coder) Message-ID: <20190605160216.GA6188@lap> Hi, Have anyone tried to get this open-sourced: https://en.wikipedia.org/wiki/Perceptual_audio_coder Regards, Alexander From aksr at t-com.me Thu Jun 6 02:29:20 2019 From: aksr at t-com.me (aksr) Date: Wed, 5 Jun 2019 18:29:20 +0200 Subject: [TUHS] PAC (Perceptual audio coder) In-Reply-To: <20190605160216.GA6188@lap> References: <20190605160216.GA6188@lap> Message-ID: <20190605162920.GA18318@lap> On Wed, Jun 05, 2019 at 06:02:16PM +0200, aksr wrote: > Have anyone tried to get this open-sourced: *Has anyone... From crossd at gmail.com Thu Jun 6 04:47:25 2019 From: crossd at gmail.com (Dan Cross) Date: Wed, 5 Jun 2019 14:47:25 -0400 Subject: [TUHS] PAC (Perceptual audio coder) In-Reply-To: <20190605162920.GA18318@lap> References: <20190605160216.GA6188@lap> <20190605162920.GA18318@lap> Message-ID: On Wed, Jun 5, 2019 at 12:29 PM aksr wrote: > On Wed, Jun 05, 2019 at 06:02:16PM +0200, aksr wrote: > > Have anyone tried to get this open-sourced: > > *Has anyone... > Not that I'm aware of, not that anyone would tell me, though I knew some people who used it. My understanding was that the most useful/interesting parts got wrapped up into MPEG-4. - Dan C. -------------- next part -------------- An HTML attachment was scrubbed... URL: From robpike at gmail.com Thu Jun 6 08:14:38 2019 From: robpike at gmail.com (Rob Pike) Date: Thu, 6 Jun 2019 08:14:38 +1000 Subject: [TUHS] PAC (Perceptual audio coder) In-Reply-To: References: <20190605160216.GA6188@lap> <20190605162920.GA18318@lap> Message-ID: Ken Thompson and Sean Dorward worked hard on PAC to get it ready for release. Our plan was to fill up the rest of the Plan 9 release CD with several hundred meg of PAC audio. I gathered together music from a number of famous musicians (I won't name drop here but you'd recognize them all), much of it recorded just for us. We were going to release the source code for the decoder and, in a compromise for the business people trying to sell to the broadcasting industry (they eventually succeeded; digital FM broadcasting is derived from PAC), only 386 binaries for the encoder, at least for the initial release. At the time the encoder only ran about 1/4 real time on a PC. Then an AT&T lawyer stepped in at the last minute, was deeply offensive and rude to us, and shut down the effort for completely stupid and invalid reasons. I still bristle at the memory. What an asshole. PAC was so much clearer sounding that MP3. The world would have been a happier place. If only. -rob On Thu, Jun 6, 2019 at 4:57 AM Dan Cross wrote: > On Wed, Jun 5, 2019 at 12:29 PM aksr wrote: > >> On Wed, Jun 05, 2019 at 06:02:16PM +0200, aksr wrote: >> > Have anyone tried to get this open-sourced: >> >> *Has anyone... >> > > Not that I'm aware of, not that anyone would tell me, though I knew some > people who used it. My understanding was that the most useful/interesting > parts got wrapped up into MPEG-4. > > - Dan C. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyndon at orthanc.ca Sat Jun 8 06:14:14 2019 From: lyndon at orthanc.ca (Lyndon Nerenberg) Date: Fri, 07 Jun 2019 13:14:14 -0700 Subject: [TUHS] PAC (Perceptual audio coder) In-Reply-To: References: <20190605160216.GA6188@lap> <20190605162920.GA18318@lap> Message-ID: <40c8876c56404121@orthanc.ca> > Then an AT&T lawyer stepped in at the last minute, was deeply offensive and > rude to us, and shut down the effort for completely stupid and invalid > reasons. I still bristle at the memory. What an asshole. Be brave. Name names. Let him try to sue himself out of history. From jon at fourwinds.com Sat Jun 8 06:38:33 2019 From: jon at fourwinds.com (Jon Steinhart) Date: Fri, 07 Jun 2019 13:38:33 -0700 Subject: [TUHS] PAC (Perceptual audio coder) In-Reply-To: <40c8876c56404121@orthanc.ca> References: <20190605160216.GA6188@lap> <20190605162920.GA18318@lap> <40c8876c56404121@orthanc.ca> Message-ID: <201906072038.x57KcXPD029522@darkstar.fourwinds.com> Lyndon Nerenberg writes: > > Then an AT&T lawyer stepped in at the last minute, was deeply offensive and > > rude to us, and shut down the effort for completely stupid and invalid > > reasons. I still bristle at the memory. What an asshole. > > Be brave. Name names. Let him try to sue himself out of history. Ken talked about this at a conference that I attended a couple of decades ago. He had brought a whole pile of discs that he had made to give out before the idea was nixed by the attorney. Somehow audience members filched these discs while Ken was fiddling with his slides. No, I didn't end up getting one. So I don't know whether or not these included source code. Jon From clemc at ccc.com Mon Jun 10 08:32:36 2019 From: clemc at ccc.com (Clem Cole) Date: Sun, 9 Jun 2019 18:32:36 -0400 Subject: [TUHS] UNIX 50th at USENIX ATC Message-ID: Sorry for the long delay on this notice, but until this weekend there were still a few things to iron out before I made a broad announcement. First, I want to thank the wonderful folks at the Living Computers Museum and Labs who are set up to host an event at their museum for our members on the evening of July 10, which is during the week of USENIX ATC. To quote an email from their Curator, Aaron Alcorn: "*an easy-going members events with USENIX attendees as their special invited guests.*" As Aaron suggested, this event will just be computer people and computers, which seems fitting and a good match ;-) Our desire is to have as many of the old and new 'UNIX folks' at this event as possible and we can share stories of how our community got to where we are. Please spread the word, since we want to get as many people coming and sharing as we can. BTW: The Museum is hoping to have their refurbished PDP-7 running by that date. A couple of us on this list will be bringing a kit of SW in the hopes that we can boot Unix V0!! Second, USENIX BOD will provide us a room at ATC all week long to set up equipment and show off some things our community has done in the past. I have been in contact with some of you offline and will continue to do so. There should be some smaller historical systems that people will bring (plus connections to the LCM's systems via the Internet, of course) and there will be some RPi's running different emulators. I do hope that both the event and the computer room should be fun for all. Thanks, Clem Cole -------------- next part -------------- An HTML attachment was scrubbed... URL: From athornton at gmail.com Tue Jun 11 12:52:51 2019 From: athornton at gmail.com (Adam Thornton) Date: Mon, 10 Jun 2019 19:52:51 -0700 Subject: [TUHS] Question about finding curses to build on v7 Message-ID: I've been playing with simh recently, and there is a nonzero chance I will soon be acquiring a PDP/11-70. I realize I could run 2.11BSD on it, and as long as I stay away from a networking stack, I probably won't see too many coremap overflow errors. But I think I'd really rather run V7. However, there's one thing that makes it a less than ideal environment for me. I grew up after cursor-addressable terminals were a thing, and, even if I can eventually make "ed" do what I want, it isn't much fun. I've been an Emacs user since 1988 and my muscle memory isn't going to change soon (and failing being able to find and build Gosmacs or an early GNU Emacs, yes, I can get by in vi more easily than in ed; all those years playing Nethack poorly were good for something). So...where can I find a curses implementation (and really all I need in the termcap or terminfo layer is ANSI or VTxxx) that can be coerced into building on V7 pretty easily? Also, I think folks here might enjoy reading a little personal travelogue of some early Unix systems from my perspective (which is to say, a happy user of Unix for 30+ years but hardly ever near core development (I did do the DIAG 250 block driver for the zSeries port of OpenSolaris; then IBM pushed a little too hard on the price and Sun sold itself to (ugh) Oracle instead; the world would have been more fun if IBM had bought the company like we were betting on)). That's at https://athornton.dreamwidth.org/14340.html ; that in turn references a review I did about a year ago of The Unix Hater's Handbook, at https://athornton.dreamwidth.org/14272.html . Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From fair at netbsd.org Tue Jun 11 16:12:05 2019 From: fair at netbsd.org (Erik E. Fair) Date: Mon, 10 Jun 2019 23:12:05 -0700 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: Message-ID: <7247.1560233525@cesium.clock.org> Adam, The emacs you should search for is "Montgomery emacs" written by Warren Montgomery - it ran on PDP-11's under Unix. The first Unix system I had regular access to and learned on was the UCB Cory Hall PDP-11/70 running 2.8 BSD starting in Winter 1981, but before I learned vi, I'd learned Emacs on TOPS-20 on a DECsystem-20/60 at Stanford during summer school a few years prior, so any emacs was the quick way in for me. I switched because I got tired of having one finger on the CTRL key all day long ... IIRC, it was a stripped down version as compared to what I now know is "original" Emacs written in TECO macros - no "minibuffer" and some other stuff missing, but it had enough of the "right" keybindings that someone who knew emacs already could make it go. I'm still in touch with Ken Arnold (we were contemporaries at UCB), and he might be willing to help you make termcap & termlib go on V7 Unix. As I remember that code, it's not that big. It wouldn't surprise me if the NetBSD CVS repository for that code has the original versions. As for the Unix Hater's Handbook, a long stretch of the chapter on sendmail is an E-mail I sent to the RISKS digest after an E-mail disaster I had to manage at Apple ... I remember being surprised at seeing in the published book, until I saw the footnote directly quoting the permission I gave the authors to publish it. "Oh, yeah ..." Erik From ron at ronnatalie.com Tue Jun 11 22:19:47 2019 From: ron at ronnatalie.com (ron at ronnatalie.com) Date: Tue, 11 Jun 2019 08:19:47 -0400 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: <7247.1560233525@cesium.clock.org> References: <7247.1560233525@cesium.clock.org> Message-ID: <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> The other early "emacs" we ran before switching to gosmacs was JOVE--Jonathan's Own Version of Emacs. From clemc at ccc.com Tue Jun 11 23:43:42 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 11 Jun 2019 09:43:42 -0400 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> References: <7247.1560233525@cesium.clock.org> <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> Message-ID: Two more thoughts... 1.) Zimmerman EMACS (a.k.a. CCA EMACS) ran on the PDP-11 originally when Steve wrote it at MIT. It's the closest to the original ITS/PDP-10 emacs of all the originals that I knew. I'm pretty sure he converted it to Pavel's freely available terminfo implementation at some point (when he was at Masscomp), but I think the original Zimmerman code has screwed down terminal support to a couple of terminals that were used at MIT. I've lost track of Steve, but I'll see if I can find you an email by reaching out on an Alumni list. 2.) I believe the first (joy created) termcap was in 2BSD but I don't think Arnold and Horton had started to pull the curses library out of vi yet. I think termcap itself had been but Mary Ann would be more authoritative than I. Check out the 2BSD, 3BSD, and 4BSD releases and look for the earliest versions. The C compiler is pretty much the same in all cases (the only issue I can think is that by 3BSD folks at UCB had removed dmr's 7 character variable limit), but I think curses should compile without too much issue on a virgin dmr V7 compiler. ᐧ On Tue, Jun 11, 2019 at 8:20 AM wrote: > The other early "emacs" we ran before switching to gosmacs was > JOVE--Jonathan's Own Version of Emacs. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patbarron at acm.org Tue Jun 11 23:59:49 2019 From: patbarron at acm.org (Pat Barron) Date: Tue, 11 Jun 2019 09:59:49 -0400 (EDT) Subject: [TUHS] Montgomery's emacs Message-ID: I'm reminded since Erik brought this up... Is Warren Montgomery's emacs available, like, anywhere... I used it long ago on V7m, and I had it on my AT&T 7300 (where it was available as a binary package). It's the first emacs I ever used. I don't recall where we got it for the PDP-11. On our system, we had it permission-restricted so only certain trusted users could use it - basically, people who could be trusted not to be in it all the time, and not to use it while the system was busy. We had an 11/40 with 128K, and 2 or 3 people trying to use Mongomery emacs would basically crush the system... In the absence of that, I've always found JOVE to be the next best thing, as far as being lightweight and sufficently emacs-like. I actually install it on almost all of my Linux systems. Did JOVE ever run on V7? --Pat. From stewart at serissa.com Wed Jun 12 01:02:49 2019 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 11 Jun 2019 11:02:49 -0400 Subject: [TUHS] Old Emacs In-Reply-To: References: Message-ID: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> I have a copy of the sources for Dave Conroy’s microemacs, if there’s any interest. It is certainly the smallest one I know about. I suppose it was quite late to the emacs party, dating from 1989 or so. The sources include support for Ultrix and various mini and micro systems, plus a few terminal types. I used to use to use it on small and partially installed systems for editing config files. This role seems to be taken by nano in the modern day. I asked him once how to change the key bindings and Dave said “You use the Change Configuration command.” “On Unix it is abbreviated as cc.” -L From clemc at ccc.com Wed Jun 12 01:15:32 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 11 Jun 2019 11:15:32 -0400 Subject: [TUHS] Old Emacs In-Reply-To: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> References: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> Message-ID: On Tue, Jun 11, 2019 at 11:11 AM Lawrence Stewart wrote: > I asked him once how to change the key bindings and Dave said “You use the > Change Configuration command.” “On Unix it is abbreviated as cc.” +1 a true hackers answer. ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Wed Jun 12 01:22:15 2019 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 11 Jun 2019 11:22:15 -0400 (EDT) Subject: [TUHS] Montgomery's emacs Message-ID: <20190611152215.B39B818C09A@mercury.lcs.mit.edu> > From: Pat Barron > Is Warren Montgomery's emacs available, like, anywhere... I've got a copy on the dump of the MIT PWB system. I'm actually supposed to resurrect it for someone, IIRC, (the MIT system was .. idiosyncratic, so it'll take a bit of tweaking), but haven't gotten to it yet. Does anyone else have the source, or is mine the only one left? Noel From mah at mhorton.net Wed Jun 12 01:48:45 2019 From: mah at mhorton.net (Mary Ann Horton Gmail) Date: Tue, 11 Jun 2019 08:48:45 -0700 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: References: <7247.1560233525@cesium.clock.org> <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> Message-ID: <9df3d45e-b817-2c17-2ec1-2f7d22e71a37@mhorton.net> Termcap and termlib from 2BSD should work fine on standard V6/V7 - that's what they were originally written for.  You don't need curses for vi or emacs, they have their own comparable code internally.  In fact, the original 2BSD curses from Ken Arnold was basically the vi code pulled out into a separate library. Warren Montgomery's emacs was internal to Bell Labs and intended for Bell Labs versions of PDP-11 UNIX, not 2BSD, although I recall it was often ported to the Vax. I can't recall which version of UNIX they ran in the various Computer Centers in the early 1980s when this happened, but I doubt it was V7; probably PWB or UNIX/TS. It would have had the Ritchie C compiler. I can't recall if it used termcap or had the terminals hardcoded - apparently both, according to this: https://tech-insider.org/unix/research/1983/0119.html     Mary Ann On 6/11/19 6:43 AM, Clem Cole wrote: > Two more thoughts... > > 1.) Zimmerman EMACS (a.k.a. CCA EMACS) ran on the PDP-11 originally > when Steve wrote it at MIT.  It's the closest to the original > ITS/PDP-10 emacs of all the originals that I knew.    I'm pretty sure > he converted it to Pavel's freely available terminfo implementation at > some point (when he was at Masscomp), but I think the original > Zimmerman code has screwed down terminal support to a couple of > terminals that were used at MIT.   I've lost track of Steve, but I'll > see if I can find you an email by reaching out on an Alumni list. > > 2.) I believe the first (joy created) termcap was in 2BSD but I don't > think Arnold and Horton had started to pull the curses library out of > vi yet.  I think termcap itself had been but Mary Ann would be more > authoritative than I.  Check out the 2BSD, 3BSD, and 4BSD releases and > look for the earliest versions.   The C compiler is pretty much the > same in all cases (the only issue I can think is that by 3BSD folks at > UCB had removed dmr's 7 character variable limit), but I think curses > should compile without too much issue on a virgin dmr V7 compiler. > ᐧ > > On Tue, Jun 11, 2019 at 8:20 AM > wrote: > > The other early "emacs" we ran before switching to gosmacs was > JOVE--Jonathan's Own Version of Emacs. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cbbrowne at gmail.com Wed Jun 12 01:52:57 2019 From: cbbrowne at gmail.com (Christopher Browne) Date: Tue, 11 Jun 2019 11:52:57 -0400 Subject: [TUHS] Old Emacs In-Reply-To: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> References: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> Message-ID: On Tue, 11 Jun 2019 at 11:11, Lawrence Stewart wrote: > I have a copy of the sources for Dave Conroy’s microemacs, if there’s any > interest. > It is certainly the smallest one I know about. > > I suppose it was quite late to the emacs party, dating from 1989 or so. > The sources include support for Ultrix and various mini and micro systems, > plus a few terminal types. > > There's some pretty decent discussion of forks of this here... https://www.emacswiki.org/emacs/MicroEmacs Perhaps also see... http://texteditors.org/cgi-bin/wiki.pl?MicroEmacs I see the torvalds "fork"; it looks like it gets a patch every year or so. https://github.com/torvalds/uemacs By the way, JOVE is still maintained, albeit not super actively. http://www.cs.toronto.edu/pub/hugh/jove-dev/ > I used to use to use it on small and partially installed systems for > editing config files. This role seems to be taken by nano in the modern > day. > > I asked him once how to change the key bindings and Dave said “You use the > Change Configuration command.” “On Unix it is abbreviated as cc.” Love it!!! I liked that about the configuration of wmx (a window manger), although less enthralled at the "change configuration command" being "g++" -- When confronted by a difficult problem, solve it by reducing it to the question, "How would the Lone Ranger handle this?" -------------- next part -------------- An HTML attachment was scrubbed... URL: From mah at mhorton.net Wed Jun 12 01:55:11 2019 From: mah at mhorton.net (Mary Ann Horton Gmail) Date: Tue, 11 Jun 2019 08:55:11 -0700 Subject: [TUHS] Montgomery's emacs In-Reply-To: <20190611152215.B39B818C09A@mercury.lcs.mit.edu> References: <20190611152215.B39B818C09A@mercury.lcs.mit.edu> Message-ID: Warren's emacs would have been part of the Bell Labs 'exptools' (experimental tools) package, which was an internally distributed package of 3rd party software that wasn't part of the standard UNIX distributions at the time.  vi/termcap/termlib was also part of exptools. If exptools isn't preserved anywhere, it would be worthwhile to try to find it.  Noel - it's possible that's what you have. I can't find it anywhere else easily.     Mary Ann On 6/11/19 8:22 AM, Noel Chiappa wrote: > > From: Pat Barron > > > Is Warren Montgomery's emacs available, like, anywhere... > > I've got a copy on the dump of the MIT PWB system. I'm actually supposed to > resurrect it for someone, IIRC, (the MIT system was .. idiosyncratic, so it'll > take a bit of tweaking), but haven't gotten to it yet. > > Does anyone else have the source, or is mine the only one left? > > Noel From jnc at mercury.lcs.mit.edu Wed Jun 12 03:02:54 2019 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Tue, 11 Jun 2019 13:02:54 -0400 (EDT) Subject: [TUHS] Montgomery's emacs Message-ID: <20190611170254.2DB4018C09A@mercury.lcs.mit.edu> > From: Mary Ann Horton > Warren's emacs would have been part of the Bell Labs 'exptools' > (experimental tools) package ... it's possible that's what you have. I don't think so; Warren had been a grad student in our group, and we got it on that basis. I'm pretty sure we didn't have termcap or any of that stuff. Noel From lars at nocrew.org Wed Jun 12 03:12:44 2019 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 11 Jun 2019 17:12:44 +0000 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: (Clem Cole's message of "Tue, 11 Jun 2019 09:43:42 -0400") References: <7247.1560233525@cesium.clock.org> <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> Message-ID: <7wzhmoqc4z.fsf@junk.nocrew.org> Clem Cole wrote: > 1.) Zimmerman EMACS (a.k.a. CCA EMACS) ran on the PDP-11 originally > when Steve wrote it at MIT. I have this on the origin of Montgomery and Zimmerman Emacs: "[Montgomery's] emacs implementation was begun in 1979, after having left MIT. I made it freely available to people INSIDE of Bell Labs, and it was widely used. It was never officially "released" from Bell Labs." "Unfortunately, several copies did get out during that time, mainly due to people who left Bell Labs to return to school or gave copies to friends. When Zimmerman modified one of those copies as the original basis for CCA emacs, AT&T and CCA had a prolonged debate over it. Eventually the matter was resolved when Zimmerman replaced the last of my code" https://github.com/larsbrinkhoff/emacs-history/blob/sources/Usenet/net.emacs/btl-emacs-2.txt From clemc at ccc.com Wed Jun 12 03:12:51 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 11 Jun 2019 13:12:51 -0400 Subject: [TUHS] Montgomery's emacs In-Reply-To: <20190611170254.2DB4018C09A@mercury.lcs.mit.edu> References: <20190611170254.2DB4018C09A@mercury.lcs.mit.edu> Message-ID: I thought much of the exptools went into something whos name was like the AT&T Unix Toolkit Library (that Summit maintained). It was subscription oriented (you paid per tool, but had an unlimited license for it). This was how Korn Shell for $2K and a few other things made it out of Bell - I think that eventually, ditroff was moved there instead of being a separate distribution. I've now forgotten many of the details - there was a build/make replacement IIRC that was there also, many of the Jerq tools and games like GBACA and some others were in there. Thinking about it much of the support for Jerq (68000) and Teletype version (BLIT/We32000) may have been in the Toolkit library. ᐧ On Tue, Jun 11, 2019 at 1:03 PM Noel Chiappa wrote: > > From: Mary Ann Horton > > > Warren's emacs would have been part of the Bell Labs 'exptools' > > (experimental tools) package ... it's possible that's what you have. > > I don't think so; Warren had been a grad student in our group, and we got > it > on that basis. I'm pretty sure we didn't have termcap or any of that stuff. > > Noel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kix at kix.es Wed Jun 12 03:08:31 2019 From: kix at kix.es (=?UTF-8?Q?Rodolfo_Garc=C3=ADa_Pe=C3=B1as_=28kix=29?=) Date: Tue, 11 Jun 2019 17:08:31 +0000 Subject: [TUHS] Old Emacs In-Reply-To: References: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> Message-ID: ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On Tuesday, 11 de June de 2019 17:15, Clem Cole wrote: > On Tue, Jun 11, 2019 at 11:11 AM Lawrence Stewart wrote: > >> I asked him once how to change the key bindings and Dave said “You use the Change Configuration command.” “On Unix it is abbreviated as cc.” > > +1 a true hackers answer. > ᐧ Nice email signature quote :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.bowling at kev009.com Wed Jun 12 03:25:30 2019 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Tue, 11 Jun 2019 10:25:30 -0700 Subject: [TUHS] UNIX 50th at USENIX ATC In-Reply-To: References: Message-ID: The conference looks supremely uninteresting outside one WAFL talk to me. Is there a way to participate without attending Usenix ATC? On Sun, Jun 9, 2019 at 3:33 PM Clem Cole wrote: > Sorry for the long delay on this notice, but until this weekend there were > still a few things to iron out before I made a broad announcement. > > > > First, I want to thank the wonderful folks at the Living Computers Museum > and Labs who are set up to host an event > at their museum for our members on the evening of July 10, which is during > the week of USENIX ATC. To quote an email from their Curator, Aaron > Alcorn: "*an easy-going members events with USENIX attendees as their > special invited guests.*" As Aaron suggested, this event will just be > computer people and computers, which seems fitting and a good match ;-) > > > > Our desire is to have as many of the old and new 'UNIX folks' at this > event as possible and we can share stories of how our community got to > where we are. Please spread the word, since we want to get as many people > coming and sharing as we can. BTW: The Museum is hoping to have their > refurbished PDP-7 running by that date. A couple of us on this list will > be bringing a kit of SW in the hopes that we can boot Unix V0!! > > > > Second, USENIX BOD will provide us a room at ATC all week long to set up > equipment and show off some things our community has done in the past. I > have been in contact with some of you offline and will continue to do so. > There should be some smaller historical systems that people will bring > (plus connections to the LCM's systems via the Internet, of course) and > there will be some RPi's running different emulators. > > > > I do hope that both the event and the computer room should be fun for all. > > > > Thanks, > > Clem Cole > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Wed Jun 12 03:26:44 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 11 Jun 2019 13:26:44 -0400 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: <7wzhmoqc4z.fsf@junk.nocrew.org> References: <7247.1560233525@cesium.clock.org> <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> <7wzhmoqc4z.fsf@junk.nocrew.org> Message-ID: Interesting and that sounds quite plausible. CCA sold it at one point. Masscomp (because Steve was working for us) got a license and a redistribution license. IIRC: we could redistribute the binary for free as long as CCA got Steve's changes back. Steve definitely did the terminfo/lib work for CCA Emacs at Masscomp, as I had pointed out that AT&T was moving to terminfo but was locking it up inside of the System V (AT&T 'consider it standard' stuff - much to a number of their own people telling them not too). Pavel ?? Curtis I think ?? - I've forgotten his last name - had written a new uncontaminated version at Cornell that was a functional replacement and that could read the AT&T ASCII database and compile them properly. (I don't remember if Pavel's version could take the AT&T binary versions). I had obtained Pavel's version and we were shipping that as our terminfo/lib implementation on the Masscomp boxes and were switching our code to use it, as we had not yet signed a System V license and were shipping on a System III based one. Steve started to include Pavel's library in the CCA version, which he got from me. ᐧ On Tue, Jun 11, 2019 at 1:12 PM Lars Brinkhoff wrote: > Clem Cole wrote: > > 1.) Zimmerman EMACS (a.k.a. CCA EMACS) ran on the PDP-11 originally > > when Steve wrote it at MIT. > > I have this on the origin of Montgomery and Zimmerman Emacs: > > "[Montgomery's] emacs implementation was begun in 1979, after having > left MIT. I made it freely available to people INSIDE of Bell Labs, > and it was widely used. It was never officially "released" from Bell > Labs." > > "Unfortunately, several copies did get out during that time, mainly > due to people who left Bell Labs to return to school or gave copies to > friends. When Zimmerman modified one of those copies as the original > basis for CCA emacs, AT&T and CCA had a prolonged debate over it. > Eventually the matter was resolved when Zimmerman replaced the last of > my code" > > > https://github.com/larsbrinkhoff/emacs-history/blob/sources/Usenet/net.emacs/btl-emacs-2.txt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at nocrew.org Wed Jun 12 03:05:39 2019 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 11 Jun 2019 17:05:39 +0000 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: (Adam Thornton's message of "Mon, 10 Jun 2019 19:52:51 -0700") References: Message-ID: <7w8su8rr18.fsf@junk.nocrew.org> Adam Thornton wrote: > I've been an Emacs user since 1988 and my muscle memory isn't going to > change soon (and failing being able to find and build Gosmacs or an > early GNU Emacs I have Gosling Emacs and early versions of GNU Emacs here: https://github.com/larsbrinkhoff/emacs-history From brad at anduin.eldar.org Wed Jun 12 03:28:03 2019 From: brad at anduin.eldar.org (Brad Spencer) Date: Tue, 11 Jun 2019 13:28:03 -0400 Subject: [TUHS] Montgomery's emacs In-Reply-To: (message from Clem Cole on Tue, 11 Jun 2019 13:12:51 -0400) Message-ID: Clem Cole writes: > I thought much of the exptools went into something whos name was like the > AT&T Unix Toolkit Library (that Summit maintained). It was subscription > oriented (you paid per tool, but had an unlimited license for it). This > was how Korn Shell for $2K and a few other things made it out of Bell - I > think that eventually, ditroff was moved there instead of being a separate > distribution. I've now forgotten many of the details - there was a > build/make replacement IIRC that was there also, many of the Jerq tools and > games like GBACA and some others were in there. Thinking about it much of > the support for Jerq (68000) and Teletype version (BLIT/We32000) may have > been in the Toolkit library. > ᐧ > nmake ?? I think it may have been called. I touched something that matches what is being called "exptools" and the like when I was at 6200 Broad St. We used nmake and ksh extensively in the software project I was a part of, and I know I had access to the source for an ancient version of nmake at one point. And I remember the subscription thing too and I seem to recall you had to pay per architecture at least by the time I was exposed to it. I got the ancient nmake version compiled on HP-UX 10.x to get part of the product I was a part of building on HP-UX 10.x. The official HP-UX 10.x version from the subscription service was expensive, as I remember things. There was at least one person in the group who used a version of emacs that was from the same, or related, source. I never used it, as I preferred GNU emacs. -- Brad Spencer - brad at anduin.eldar.org - KC8VKS - http://anduin.eldar.org From lars at nocrew.org Wed Jun 12 03:28:04 2019 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 11 Jun 2019 17:28:04 +0000 Subject: [TUHS] Montgomery's emacs In-Reply-To: <20190611152215.B39B818C09A@mercury.lcs.mit.edu> (Noel Chiappa's message of "Tue, 11 Jun 2019 11:22:15 -0400 (EDT)") References: <20190611152215.B39B818C09A@mercury.lcs.mit.edu> Message-ID: <7wimtc58wr.fsf@junk.nocrew.org> Noel Chiappa writes: > > Is Warren Montgomery's emacs available, like, anywhere... > > I've got a copy on the dump of the MIT PWB system. [...] Does anyone > else have the source, or is mine the only one left? I have been looking, and all I got so far is something from http://unixpc.taronga.com/STORE/, and some floppy disk images for the Unix PC. Probably binary only. From lars at nocrew.org Wed Jun 12 03:08:36 2019 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 11 Jun 2019 17:08:36 +0000 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: <7247.1560233525@cesium.clock.org> (Erik E. Fair's message of "Mon, 10 Jun 2019 23:12:05 -0700") References: <7247.1560233525@cesium.clock.org> Message-ID: <7w4l4wrqwb.fsf@junk.nocrew.org> Erik E. Fair wrote: > The emacs you should search for is "Montgomery emacs" written by > Warren Montgomery - it ran on PDP-11's under Unix. It's also known as BTL Emacs, AT&T Emacs, Unix PC Emacs, and Toolchest Emacs. I have some later versions, but I'm not sure they'll run on a PDP-11. From stewart at serissa.com Wed Jun 12 03:53:46 2019 From: stewart at serissa.com (Lawrence Stewart) Date: Tue, 11 Jun 2019 13:53:46 -0400 Subject: [TUHS] Old Emacs In-Reply-To: <7wmuio593y.fsf@junk.nocrew.org> References: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> <7wmuio593y.fsf@junk.nocrew.org> Message-ID: > On 2019, Jun 11, at 1:23 PM, Lars Brinkhoff wrote: > > Lawrence Stewart wrote: >> I have a copy of the sources for Dave Conroy’s microemacs, if there’s >> any interest. > > I got version 30 from Conroy, from 1986 by his estimate. If yours > are older, I'm interested. It is hard to tell, I have about 20 copies, on backups of backups of backups. I’ll see if I can untangle them. We can always ask Dave too. He’s in Half Moon Bay, CA these days. The log file from one of mine goes from V1 on 1-Jan-85 to 28-Sep-87 so likely yours is older. V30 is listed as 14-Apr-86 -L From clemc at ccc.com Wed Jun 12 03:55:42 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 11 Jun 2019 13:55:42 -0400 Subject: [TUHS] Montgomery's emacs In-Reply-To: References: <20190611170254.2DB4018C09A@mercury.lcs.mit.edu> Message-ID: Reading Lar's emacs thread refilled the memory cache: s/Toolkit/ToolChest/ ᐧ On Tue, Jun 11, 2019 at 1:12 PM Clem Cole wrote: > I thought much of the exptools went into something whos name was like the > AT&T Unix Toolkit Library (that Summit maintained). It was subscription > oriented (you paid per tool, but had an unlimited license for it). This > was how Korn Shell for $2K and a few other things made it out of Bell - I > think that eventually, ditroff was moved there instead of being a separate > distribution. I've now forgotten many of the details - there was a > build/make replacement IIRC that was there also, many of the Jerq tools and > games like GBACA and some others were in there. Thinking about it much of > the support for Jerq (68000) and Teletype version (BLIT/We32000) may have > been in the Toolkit library. > ᐧ > > On Tue, Jun 11, 2019 at 1:03 PM Noel Chiappa > wrote: > >> > From: Mary Ann Horton >> >> > Warren's emacs would have been part of the Bell Labs 'exptools' >> > (experimental tools) package ... it's possible that's what you have. >> >> I don't think so; Warren had been a grad student in our group, and we got >> it >> on that basis. I'm pretty sure we didn't have termcap or any of that >> stuff. >> >> Noel >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lars at nocrew.org Wed Jun 12 03:23:45 2019 From: lars at nocrew.org (Lars Brinkhoff) Date: Tue, 11 Jun 2019 17:23:45 +0000 Subject: [TUHS] Old Emacs In-Reply-To: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> (Lawrence Stewart's message of "Tue, 11 Jun 2019 11:02:49 -0400") References: <636F7ECF-AC9A-49FC-BB7E-9AC8DB47B9F0@serissa.com> Message-ID: <7wmuio593y.fsf@junk.nocrew.org> Lawrence Stewart wrote: > I have a copy of the sources for Dave Conroy’s microemacs, if there’s > any interest. I got version 30 from Conroy, from 1986 by his estimate. If yours are older, I'm interested. From mah at mhorton.net Wed Jun 12 04:05:17 2019 From: mah at mhorton.net (Mary Ann Horton Gmail) Date: Tue, 11 Jun 2019 11:05:17 -0700 Subject: [TUHS] Question about finding curses to build on v7 In-Reply-To: References: <7247.1560233525@cesium.clock.org> <169b01d5204f$f633c1f0$e29b45d0$@ronnatalie.com> <7wzhmoqc4z.fsf@junk.nocrew.org> Message-ID: <6021078c-f1a3-8f09-8c5e-5bb87e1edbbe@mhorton.net> Most of what was produced internally to AT&T had to stay there, because lawyers. I was at Bell Labs by the time I changed termcap/termlib and the Arnold curses into "The New Curses and Terminfo", which I presented at Usenix in Boston 1982.  Terminfo was "compiled", and Curses had a new algorithm to use insert/delete line/char to avoid having to redraw the whole screen. I wasn't allowed to distribute it outside AT&T. Pavel Curtis of CMU stepped up and, at my encouragement, volunteered to rewrite it to the same spec. I worked with him on the spec and the algorithm, and his version was available to open source. If you were at the Boston conference, you may recall my presentation. My Director, Tony Cuilwik, was in the audience, and this was my first public talk since joining Bell Labs, so I was nervous. As I was stepping to the podium to begin my talk, Armando Stettner interrupted to present me with the "Flying Rubber Chicken Award". Someone offstage threw him a rubber chicken. The chicken was quickly vanished and replaced with the real award, "The Term Cap". Armando explained that the hat was an Bell System hard hat, donated by Ken Thompson himself. Scotched to the hat were "hacker eyes" (googly eye glasses) and a Steve Martin style arrow-through-the-head "for the term info to go in and come out". He left me there, holding the award, as I had to reboot my brain to begin my talk. I still have that award. It graced my workplace for many years. When I worked at Bank One in Columbus, I put it on a styrofoam head on top of my cube. A coworker had contributed a yellow cheerleader pompom which gave her hair. When Chase bought Bank One, there were Chase big shots coming through our building, I was told to take it down because it didn't look "professional". I was offended - "that's an award!" It stayed down for several months, and people complained because, in that cube farm of identical rows of cubes, "people used that for navigation". I made a little plaque explaining the award and placed it next to the restored Term Cap on my cube. The award sat on my cube at SDG&E for 11 years without incident, and now that I'm retired I proudly display it on my piano at home.     Mary Ann On 6/11/19 10:26 AM, Clem Cole wrote: > Interesting and that sounds quite plausible.   CCA sold it at one > point. Masscomp (because Steve was working for us) got a license and a > redistribution license.   IIRC: we could redistribute the binary for > free as long as CCA got Steve's changes back. > > Steve definitely did the terminfo/lib work for CCA Emacs at Masscomp, > as I had pointed out that AT&T was moving to terminfo but was locking > it up inside of the System V (AT&T 'consider it standard' stuff - much > to a number of their own people telling them not too).   Pavel ?? > Curtis I think ?? - I've forgotten his last name -  had written a new > uncontaminated version at Cornell that was a functional replacement > and that could read the AT&T ASCII database and compile them > properly.   (I don't remember if Pavel's version could take the AT&T > binary versions).  I had obtained Pavel's version and we were shipping > that as our terminfo/lib implementation on the Masscomp boxes and were > switching our code to use it, as we had not yet signed a System V > license and were shipping on a System III based one.   Steve started > to include Pavel's library in the CCA version, which he got from me. > ᐧ > > On Tue, Jun 11, 2019 at 1:12 PM Lars Brinkhoff > wrote: > > Clem Cole wrote: > > 1.) Zimmerman EMACS (a.k.a. CCA EMACS) ran on the PDP-11 originally > > when Steve wrote it at MIT. > > I have this on the origin of Montgomery and Zimmerman Emacs: > >   "[Montgomery's] emacs implementation was begun in 1979, after having >   left MIT.  I made it freely available to people INSIDE of Bell Labs, >   and it was widely used. It was never officially "released" from Bell >   Labs." > >   "Unfortunately, several copies did get out during that time, mainly >   due to people who left Bell Labs to return to school or gave > copies to >   friends.  When Zimmerman modified one of those copies as the > original >   basis for CCA emacs, AT&T and CCA had a prolonged debate over it. >   Eventually the matter was resolved when Zimmerman replaced the > last of >   my code" > > https://github.com/larsbrinkhoff/emacs-history/blob/sources/Usenet/net.emacs/btl-emacs-2.txt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From norman at oclsc.org Wed Jun 12 04:45:32 2019 From: norman at oclsc.org (Norman Wilson) Date: Tue, 11 Jun 2019 14:45:32 -0400 Subject: [TUHS] UNIX 50th at USENIX ATC Message-ID: <1560278736.16251.for-standards-violators@oclsc.org> Kevin Bowling: The conference looks supremely uninteresting outside one WAFL talk to me. ==== That is, of course, a matter of opinion. Just from skimming titles I see about two dozen talks of at least some interest to me in the ATC program. And that's just ATC; I'm planning to attend the Hot* workshops on Monday and Tuesday as well. Of course I won't attend every one of those talks--some coincide in time, some I'll miss because I get stuck in the hallway track. And some will prove less interesting in practice, though others that don't seem all that interesting in the program will likely prove much better in person. I've been attending USENIX ATC for decades, and although some conferences have been meatier than others, I've never ended up feeling the trip was a waste of time. Perhaps us old farts just aren't as discriminating as you youngsters. That said, I think Kevin's question Is there a way to participate [on the UNIX50 event] without attending Usenix ATC? is a good one. Norman Wilson Toronto ON From clemc at ccc.com Wed Jun 12 07:31:30 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 11 Jun 2019 17:31:30 -0400 Subject: [TUHS] Celebration of Internet History with their UNIX at 50 Event In-Reply-To: References: Message-ID: We are all thrilled and thankful for the generosity of SDF and LCM+L by sponsoring and providing a celebration of Internet History with their UNIX at 50 Event for the USENIX ATC Attendees. We understand not all of you can participate in the conference, but would still like to be part of the celebration. Our hosts have graciously opened the event to the community at large, as I said in my previous message, it should be an evening of computer people being able to be around and discussing computer history. However, if you are not planning to attend the conference but wish to attend the evening's event, we wish that you would at least consider joining one or more of the organizations to help support them all in the future. All three organizations are members supported and need all our help and contributions to function and bring their services to everyone today and hopefully 50 years from now. Membership details for each can be found at Join SDF , LCM+L Memberships , and USENIX Association Memberships ᐧ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsteve at superglobalmegacorp.com Wed Jun 12 16:36:10 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Wed, 12 Jun 2019 14:36:10 +0800 Subject: [TUHS] MtXinu floppy set for Mach386 Message-ID: <74c33a7d-4fe2-436c-a0dd-cfc9a5cb4926@PU1APC01FT006.eop-APC01.prod.protection.outlook.com> I came across Scott Taylor’s site which mentions his adventure with MtXinu (https://www.retrosys.net/)! I had asked a few years ago (February 2017?) about locating a set Of this to no avail, but thanks to Scott the binary set is now available. ftp://ftp.mrynet.com/operatingsystems/Mach2.5/MtXinu-binary-dist/floppies/MB920331020/ There is some additional documentation to be found here. ftp://ftp.mrynet.com/operatingsystems/Mach2.5/MtXinu-binary-dist/docs The floppy drive like 386BSD is super weak and I had no luck with Qemu. VMWare worked fine to install. The VMDK will run on Qemu as low as 0.90 just fine. I haven’t tried the networking at all, so I don’t know about adapters/protocol support. I’ve been using a serial line to uuencode stuff in & out but it’s been stable. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at cs.dartmouth.edu Thu Jun 13 01:31:09 2019 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Wed, 12 Jun 2019 11:31:09 -0400 Subject: [TUHS] RIP Bud Lawson Message-ID: <201906121531.x5CFV9rM111616@tahoe.cs.Dartmouth.EDU> Bud Lawson, long an expat living in Sweden, died yesterday. Not a Unix person, he was, however, the originator of a characteristic Unix programmer's idiom. Using an idea adapted from Ken Knowlton, Bud invented the pointer- chasing arrow operator that Dennis Ritchie adopted for C. I played matchmaker. When Bud first proposed the "based storage" (pointer) facility for PL/I, he used the well-established field(pointer) notation. I introduced him to the pointer-chasing notation Knowlton devised for L6. Knowlton, however, had no operator because he had only single-letter identifiers. What we now write as a->b->c, Knowlton wrote as abc. Appreciating the absence of parentheses, Bud came up with the wonderfully intuitive pointer->field notation. Doug From ken at google.com Fri Jun 14 03:57:10 2019 From: ken at google.com (Ken Thompson) Date: Thu, 13 Jun 2019 10:57:10 -0700 Subject: [TUHS] UNIX 50th at USENIX ATC In-Reply-To: <1560278736.16251.for-standards-violators@oclsc.org> References: <1560278736.16251.for-standards-violators@oclsc.org> Message-ID: sorry all, i cant make it. On Tue, Jun 11, 2019 at 11:55 AM Norman Wilson wrote: > > Kevin Bowling: > > The conference looks supremely uninteresting outside one WAFL talk to me. > > ==== > > That is, of course, a matter of opinion. Just from skimming > titles I see about two dozen talks of at least some interest > to me in the ATC program. And that's just ATC; I'm planning > to attend the Hot* workshops on Monday and Tuesday as well. > > Of course I won't attend every one of those talks--some coincide > in time, some I'll miss because I get stuck in the hallway track. > And some will prove less interesting in practice, though others > that don't seem all that interesting in the program will likely > prove much better in person. > > I've been attending USENIX ATC for decades, and although some > conferences have been meatier than others, I've never ended up > feeling the trip was a waste of time. > > Perhaps us old farts just aren't as discriminating as you > youngsters. > > That said, I think Kevin's question > > Is there a way to participate [on the UNIX50 event] without attending Usenix ATC? > > is a good one. > > Norman Wilson > Toronto ON From jnc at mercury.lcs.mit.edu Sun Jun 23 04:17:19 2019 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sat, 22 Jun 2019 14:17:19 -0400 (EDT) Subject: [TUHS] Any oldtimers remember anything about the KS11 on the -11/20? Message-ID: <20190622181719.E10E918C0B4@mercury.lcs.mit.edu> This is an appeal to the few really-old-timers (i.e. who used the PDP-11/20 version of Unix) on the list to see if they remember _anything_ of the KS11 memory mapping unit used on that machine. Next to nothing is known of the KS11. Dennis' page "Odd Comments and Strange Doings in Unix": https://www.bell-labs.com/usr/dmr/www/odd.html has a story involving it (at the end), and that is all I've ever been able to find out about it. I don't expect documentation, but I am hoping someone will remember _basically_ what it did. My original guess as to its functionality, from that page, was that it's not part of the CPU, but a UNIBUS device, placed between the UNIBUS leaving the CPU, and the rest of the the bus, which perhaps mapped addresses around (and definitely limited user access to I/O page addresses). It might also have mapped part of the UNIBUS space which the -11/20 CPU _can_ see (i.e. in the 0-56KB range) up to UNIBUS higher addresses, where 'extra' memory is configured - but that's just a guess; but it is an example of the kind of info I'd like to find out about it - just the briefest of high-level descriptions would be an improvement on what little we have now! On re-reading that page, I see it apparently supported some sort of user/kernel mode distinction, which might have require a tie-in to the CPU. (But not necessarily; if there was a flop in the KS11 which stored the 'CPU mode' bit, it might be automatically cleared on all interrupts. Not sure how it would have handled traps, though.) Even extremely dim memories will be an improvement on the blank canvas we have now! Noel From rudi.j.blom at gmail.com Sun Jun 23 12:37:19 2019 From: rudi.j.blom at gmail.com (Rudi Blom) Date: Sun, 23 Jun 2019 09:37:19 +0700 Subject: [TUHS] Any oldtimers remember anything about the KS11 on the 11/20? Message-ID: Probably already known, but to be sure Interesting options: MX11 - Memory Extension Option: this enabled the usage of 128 KW memory (18-bit addressing range); KS11: this option provided hardware memory protection, which the plain /20 lacked. Both options were developed by the Digital CSS (Computer Special Systems). http://hampage.hu/pdp-11/1120.html PS the page listed below has a very nice picture of the 'two fathers of UNIX" working on a PDP-11/20 http://hampage.hu/unix/unix1.html From cmhanson at eschatologist.net Sun Jun 23 14:38:26 2019 From: cmhanson at eschatologist.net (Chris Hanson) Date: Sat, 22 Jun 2019 21:38:26 -0700 Subject: [TUHS] CMU Mach sources? Message-ID: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Does anyone know whether CMU’s local Mach sources have been preserved? I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. — Chris Sent from my iPhone From lm at mcvoy.com Sun Jun 23 15:15:01 2019 From: lm at mcvoy.com (Larry McVoy) Date: Sat, 22 Jun 2019 22:15:01 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: <20190623051500.GL30743@mcvoy.com> I've read the Mach source. Not a fan. If you look around you can find SunOS 4.x sources, not legal but it is out there. If you read the SunOS vm code enough, it will come into focus for you. The code matches what you think a VM system should be. If you read the Mach code, nope, it's a tangled mess, there is no clear picture there. I read the papers and wanted to believe it was good, it is not. On Sat, Jun 22, 2019 at 09:38:26PM -0700, Chris Hanson wrote: > Does anyone know whether CMU???s local Mach sources have been preserved? > > I???m not just talking about MK84.default.tar.Z and so on, I???m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. > > I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. > > If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. > > ??? Chris > > Sent from my iPhone -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From jsteve at superglobalmegacorp.com Sun Jun 23 18:04:43 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Sun, 23 Jun 2019 16:04:43 +0800 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> The fractions I've found of the 2.5 is on the CSRG #4 cd https://vpsland.superglobalmegacorp.com/install/Mach/MACH_CSRG_CD.7z You have to read the 404 to get the password. It changes frequently. I've been slowly trying to get 2.5 to build under Mt Xinu BSD/Mach. I've managed only the tools and bootloader so far.  I've just been busy moving offices the last week. Anyway in that directory is a bunch of other Mach stuff I've found. I also started trying to map the Mach 3.0 stuff https://unix.superglobalmegacorp.com/cgi-bin/cvsweb.cgi/?sortby=file&only_with_tag=MAIN&hideattic=1&hidenonreadable=1&f=u&logsort=date&cvsroot=Mach3&path= Including BSDSS a unknown to me port of BSD to Mach 3. It seems plenty of the Mach 1/2 stuff is hidden on CMU's servers in wait of the ATT v CSRG lawsuit.  And that everyone who worked on it left so it's locked away hidden. There is a vmdk of the mt Xinu installed that will run on qemu and most likely others as well.  So you can take it out for a test drive From: Chris Hanson Sent: Sunday, June 23, 2019 12:46 PM To: tuhs at minnie.tuhs.org Subject: [TUHS] CMU Mach sources? Does anyone know whether CMU’s local Mach sources have been preserved? I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. — Chris Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.bowling at kev009.com Sun Jun 23 18:27:04 2019 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Sun, 23 Jun 2019 01:27:04 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: If you find this stash I am looking for the rs6000 port that lived in the "unpublished" directory of ftp://ftp.cs.cmu.edu/afs/cs/project/mach/public/doc/unpublished/rs6k_install.ps Regards, Kevin On Sat, Jun 22, 2019 at 9:45 PM Chris Hanson wrote: > > Does anyone know whether CMU’s local Mach sources have been preserved? > > I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. > > I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. > > If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. > > — Chris > > Sent from my iPhone From andreww591 at gmail.com Sun Jun 23 18:52:30 2019 From: andreww591 at gmail.com (Andrew Warkentin) Date: Sun, 23 Jun 2019 02:52:30 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190623051500.GL30743@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <20190623051500.GL30743@mcvoy.com> Message-ID: On 6/22/19, Larry McVoy wrote: > I've read the Mach source. Not a fan. If you look around you can find > SunOS 4.x sources, not legal but it is out there. > > If you read the SunOS vm code enough, it will come into focus for you. > The code matches what you think a VM system should be. > > If you read the Mach code, nope, it's a tangled mess, there is no > clear picture there. > > I read the papers and wanted to believe it was good, it is not. > I've never actually read Mach's sources, but it doesn't surprise me that Mach's implementation is every bit as much of a train wreck as its design. Mach and the other kernels influenced by it basically destroyed the reputation of microkernels, even though there were microkernels that performed comparably to or better than monolithic kernels that actually predated Mach et al. There was one paper from 1992 [1] in which an early version of QNX 4 significantly outperformed System V in just about every category benchmarked (one of these days I should try to benchmark a newer version of QNX against Linux and see if the results still hold up). I wonder, had QNX or something like it had been the "next big thing" in the late 80s and early 90s rather than Mach, if microkernels wouldn't have become the dominant OS architecture, or at least a credible alternative. I also wonder if a modern highly optimized microkernel OS could still outperform monolithic kernels. Current open-source microkernel OSes seem to focus on academic purism rather than real-world performance (one of the biggest issues is that they tend to split subsystems up vertically with little benefit to security or stability, and probably adding significant overhead to system calls; e.g. on noux under Genode, a simple read() of a disk file, which is a single kernel call on a monolithic kernel and usually two context switches on QNX, takes at least 8 context switches - client->VFS->disk FS->partition driver->disk driver and back again). I may find out once I get UX/RT [2] (the QNX/Plan 9-like seL4-based OS I'm writing) working, since it will focus on real-world performance over academic purism (I'm an architectural purist in many ways, but I think purism must further performance and usability, not hinder them), although I don't necessarily expect early versions to have the best performance because the implementation will probably be suboptimal in places. I'm still a ways from getting it working though. [1] https://cseweb.ucsd.edu/~voelker/cse221/papers/qnx-paper92.pdf [2] https://gitlab.com/uxrt From ron at ronnatalie.com Sun Jun 23 22:16:47 2019 From: ron at ronnatalie.com (Ron Natalie) Date: Sun, 23 Jun 2019 08:16:47 -0400 Subject: [TUHS] Any oldtimers remember anything about the KS11 on the 11/20? In-Reply-To: References: Message-ID: <38B5393D-5316-484F-B499-2931E8C0C035@ronnatalie.com> We always referred to CSS as the DEC kludge department. Sent from my iPhone > On Jun 22, 2019, at 22:37, Rudi Blom wrote: > > Probably already known, but to be sure > > Interesting options: MX11 - Memory Extension Option: this enabled the > usage of 128 KW memory (18-bit addressing range); KS11: this option > provided hardware memory protection, which the plain /20 lacked. Both > options were developed by the Digital CSS (Computer Special Systems). > http://hampage.hu/pdp-11/1120.html > > PS the page listed below has a very nice picture of the 'two fathers > of UNIX" working on a PDP-11/20 > http://hampage.hu/unix/unix1.html From nobozo at gmail.com Sun Jun 23 23:39:26 2019 From: nobozo at gmail.com (Jon Forrest) Date: Sun, 23 Jun 2019 06:39:26 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190623051500.GL30743@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <20190623051500.GL30743@mcvoy.com> Message-ID: <8d4e21dd-f2f8-347b-94c3-072898c01c54@gmail.com> On 6/22/2019 10:15 PM, Larry McVoy wrote: > I've read the Mach source. Not a fan. If you look around you can find > SunOS 4.x sources, not legal but it is out there. > If you read the Mach code, nope, it's a tangled mess, there is no > clear picture there. > > I read the papers and wanted to believe it was good, it is not. There's one thing to keep in mind about some software produced in an academic environment. Sometimes it's a collection of proofs of concept of clever ideas that various grad student have hacked together for their MS or PhD work. It's not intended to be production quality. I don't know anything about Mach, but this was certainly the state of Postgres when I worked in the Postgres group in 1991-1995. We tried to use it as the basis for a big research project (e.g. Sequoia 2000) but spent (wasted?) lots of time fighting Postgres issues. Eventually, long after I left the group, and after Mike Stonebraker left Berkeley, a group of people who weren't associated with UC Berkeley did a truly heroic job and "fixed" Postgres. The production quality Postgres you see now is the result. The BSD project was different, for all kinds of reasons. I wonder if Mach was a Postgres or BSD style project. Cordially, Jon Forrest From jsteve at superglobalmegacorp.com Mon Jun 24 00:03:56 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Sun, 23 Jun 2019 14:03:56 +0000 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8d4e21dd-f2f8-347b-94c3-072898c01c54@gmail.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <20190623051500.GL30743@mcvoy.com> <8d4e21dd-f2f8-347b-94c3-072898c01c54@gmail.com> Message-ID: I guess XNU is the 'fixed' version of Mach. At least it powers all those iPhones and iPads. And I do have the buildable source of Darwin 0.1/0.3 which is the equivalent of OS X Server 1.0 It's a.. Fusion of Mach 2.5 and 4.4BSD.  I've heard that NeXTSTEP is more 4.3BSD Get Outlook for Android On Sun, Jun 23, 2019 at 9:40 PM +0800, "Jon Forrest" wrote: On 6/22/2019 10:15 PM, Larry McVoy wrote: > I've read the Mach source. Not a fan. If you look around you can find > SunOS 4.x sources, not legal but it is out there. > If you read the Mach code, nope, it's a tangled mess, there is no > clear picture there. > > I read the papers and wanted to believe it was good, it is not. There's one thing to keep in mind about some software produced in an academic environment. Sometimes it's a collection of proofs of concept of clever ideas that various grad student have hacked together for their MS or PhD work. It's not intended to be production quality. I don't know anything about Mach, but this was certainly the state of Postgres when I worked in the Postgres group in 1991-1995. We tried to use it as the basis for a big research project (e.g. Sequoia 2000) but spent (wasted?) lots of time fighting Postgres issues. Eventually, long after I left the group, and after Mike Stonebraker left Berkeley, a group of people who weren't associated with UC Berkeley did a truly heroic job and "fixed" Postgres. The production quality Postgres you see now is the result. The BSD project was different, for all kinds of reasons. I wonder if Mach was a Postgres or BSD style project. Cordially, Jon Forrest -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnold at skeeve.com Sun Jun 23 23:59:41 2019 From: arnold at skeeve.com (arnold at skeeve.com) Date: Sun, 23 Jun 2019 07:59:41 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8d4e21dd-f2f8-347b-94c3-072898c01c54@gmail.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <20190623051500.GL30743@mcvoy.com> <8d4e21dd-f2f8-347b-94c3-072898c01c54@gmail.com> Message-ID: <201906231359.x5NDxfCP028927@freefriends.org> Jon Forrest wrote: > I wonder if Mach was a Postgres or BSD style project. More the former than the latter. Mach entered the commercial world by way of NeXT, and there were a few other more or less production versions, such as the mt. Xinu one and MachTen (IIRC), but Mach never really caught on big. (There was even a port of Linux on top of Mach at some point.) Mach survives today in the kernel of Mac OS X (Darwin), but I think that's about it. Rick Rashid, who was the guiding professor for Mach, went to Microsoft Research, and as far as I can tell, fell off the radar screen for OS development work. (Anyone who knows different feel free to correct me.) Arnold From henry.r.bent at gmail.com Mon Jun 24 00:54:44 2019 From: henry.r.bent at gmail.com (Henry Bent) Date: Sun, 23 Jun 2019 10:54:44 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> Message-ID: I know that it's not exactly what was asked for, but DEC's OSF/1 V1.0 source is floating around out there in the ether, which is a fusion of Mach 2.5 and Ultrix 4.2. It's the original release, for MIPS-based DECstations, of what eventually became Digital UNIX / Tru64. -Henry On Sun, 23 Jun 2019 at 04:05, Jason Stevens wrote: > The fractions I've found of the 2.5 is on the CSRG #4 cd > > https://vpsland.superglobalmegacorp.com/install/Mach/MACH_CSRG_CD.7z > > > You have to read the 404 to get the password. It changes frequently. > > I've been slowly trying to get 2.5 to build under Mt Xinu BSD/Mach. > > I've managed only the tools and bootloader so far. I've just been busy > moving offices the last week. > > Anyway in that directory is a bunch of other Mach stuff I've found. > > I also started trying to map the Mach 3.0 stuff > > > https://unix.superglobalmegacorp.com/cgi-bin/cvsweb.cgi/?sortby=file&only_with_tag=MAIN&hideattic=1&hidenonreadable=1&f=u&logsort=date&cvsroot=Mach3&path= > > Including BSDSS a unknown to me port of BSD to Mach 3. > > It seems plenty of the Mach 1/2 stuff is hidden on CMU's servers in wait > of the ATT v CSRG lawsuit. And that everyone who worked on it left so it's > locked away hidden. > > There is a vmdk of the mt Xinu installed that will run on qemu and most > likely others as well. So you can take it out for a test drive > > > > > > *From: *Chris Hanson > *Sent: *Sunday, June 23, 2019 12:46 PM > *To: *tuhs at minnie.tuhs.org > *Subject: *[TUHS] CMU Mach sources? > > > > Does anyone know whether CMU’s local Mach sources have been preserved? > > > > I’m not just talking about MK84.default.tar.Z and so on, I’m talking > about all the bits of Mach that were used on cluster systems on campus, > prior to the switch to vendor UNIX. > > > > I know at least one person who had complete MacMach sources for the last > version, but threw out the backup discs with the sources in the process of > moving. So I know they exist. > > > > If nothing else, CMU did provide other sites their UX source package (eg > UX42), which was the BSD single server environment. So I know that has to > be out there, somewhere. > > > > — Chris > > > > Sent from my iPhone > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Mon Jun 24 07:52:33 2019 From: clemc at ccc.com (Clem Cole) Date: Sun, 23 Jun 2019 21:52:33 +0000 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> Message-ID: A couple of thoughts... 1.) Mach2.5 and 3.0 was part of an *extremely successful research project* but also did suffer from issues associated with being so. CSRG BTW *was not a research project*, contrary to its name, it was a support contract for DARPA since BTL would not support UNIX the way DEC, IBM, *et al*, did with their products. The reality is that Mach more than Thoth**), V Kernel, QNX, Sol, Chrous, Minux or any other uK of the day, changed the way people started to think about building an OS. Give Rashid and team credit - it showed some extremely interesting and aggressive ideas could be made to work. 2.) Comparing Mach with BSD or SunOS 4.3 is not really a valid comparison. They had different goals and targets. Comparing Ultrix, Tru64, or Mac OSx with SunOS (or Solaris for that matter) is fair. They all were products. They needed to be clean, maintainable and extensible [as Larry likes to point out, Sun traded a great deal of that away for political reasons with Solaris]. But the bottom line, you are comparing a test vehicle to a production one. And I while I agree with Larry on the internals and a lot of the actual code in Mach, I was always pretty damned impressed with what they crammed into a 5 lbs bag in a very short time. 3.) Mach2.5/386/Vax/etc.. << OSF/1 386 the later is similar what MtUnix shipped. Both are 'hybrid' kernels. But while MtUnix created a product with it, they were too small to do what DEC would later do. But the investment was greater than I think they could really afford. 4.) Mach 3.0 was from CMU, Mach 4.0 (which is still sort of available) was from the OSF/1 [this is a pure uK]. 3.) DEC OSF/1 (for MIPS) << Tru64 (for Alpha) - *a.k.a.* Digital UNIX - yes both started with a Mach 2.5 hybrid kernel and the later was mostly the same as OSF/1386, and both supported the Mach2.5 kernel message system - but DEC's team rewrote darned near everything in the kernel -- which in fact was both a bless and a curse [more in a minute]. Ok, so why have I bothered with all this mess. The fact is Mach was able to be turned into a product, both Apple and DEC did it. Apple had the advantage of starting with NextOS which (along with machTen) was the first short at making a 'product' out of it. But they invested a lot over the years and incrementally changed it. Enough to create XNU. DEC was a different story (which I lived a bit of personally). The DEC PMAX (mips) and the Intel 386 were the first references from OSF. OSF had an issue. IBM was supposed to deliver an OS, but for a number of reasons was not ready when OSF needed something. CMU had something that was 'good enough.' This is probably where Larry and I differ a little on shipping code. I'm a great believer figure out one solid goal and nailing it, and the rest is 'good enough' - i.e. fix in version 2. I think OSF/1 as a reference system nailed it. Their job was get something out as a starting base that ran on at least 2 workstations (and one server - which IIRC was an HP, maybe an Encore box) but able to be shipped on an AT&T V.3 unlimited license [which IBM had brought to the table]. The fact that they did not spend a lot of time cleaning up about CMU at this stage was not their job. The kernel had to be good enough - and it was (Larry might argue Mach2.5 vs SunOS 4.3 it was not as good technically - and he might be right - but that was not their job). So DEC gets a new code based. They have Ultrix (a product) for the PMAX. OSF has released the reference port. From a kernel code quality standpoint, OSF1 1.0/PMAX < Ultrix/RISC 4.5. They also are moving to a new 64-bit processor that is not going to run either VAX or PMAX binaries ( *i.e.* you will have to recompile). Two technical decisions and one marketing one were made at the management level that would later doom Tru64. First, it was agreed that Tru64 was going to be 'pure 64-bit' and it turned out >>none of the ISVs had clean code. Moreover, there were no tools to help find 64-bit issues. This single choice cost DEC 3 years in the ability to ship Tru64/Alpha. The second choice was DEC's team decided to re-write OSF/1 subsystem by subsystem. The argument would be: the XXX system sucks. It will never scale on a 64-bit system and it will not work for clusters. XXX was at least Memory Management, Terminal Handler, Basic I/O, SCSI, File System. The >>truth<< is each of these was actually right in the small, they did suck. But the fact is, they all were good enough to get the system out the door and get customers and ISV's starting the process of using the system. Yes, Megasafe is an excellent FS, but UFS was good enough to start. The marketing decision BTW, that not to ship Tru64/PMAX. Truth is it was running internally. But Marketing held that Tru64 was the sexy cool thing and did not want to make it available. The argument was they would have to support it. But the truth is that asking ISV's and customers to switch Architecture and OS in one jump, opened the door to consider Sun or HP (and with Tru64/Alpha's ecosystem taking 3 more years, people left DEC). ** Mike Malcolm was the primary author of Thoth as his PhD from Waterloo. HIs officemate, Kelly Booth (of the 'Graphics Killer-Bs) had a tee-shirt made that exhaled: 'Thoth Thucks' and gave them to the lot of the Waterloo folks. BTW, Mike and Cheridon would later go to Stanford and create V. Two of their students would create QNX with still lives. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jnc at mercury.lcs.mit.edu Mon Jun 24 08:01:38 2019 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 23 Jun 2019 18:01:38 -0400 (EDT) Subject: [TUHS] Any oldtimers remember anything about the KS11 on the 11/20? Message-ID: <20190623220138.8E5FC18C0CC@mercury.lcs.mit.edu> > From: Rudi Blom > Probably already known, but to be sure Interesting options: MX11 - > Memory Extension Option: this enabled the usage of 128 KW memory (18-bit > addressing range) Actually, I didn't know of that; something else to track down. Wrong list for that, though. Noel From jnc at mercury.lcs.mit.edu Mon Jun 24 08:08:56 2019 From: jnc at mercury.lcs.mit.edu (Noel Chiappa) Date: Sun, 23 Jun 2019 18:08:56 -0400 (EDT) Subject: [TUHS] CMU Mach sources? Message-ID: <20190623220856.AADD018C0CF@mercury.lcs.mit.edu> > From: Andrew Warkentin > Mach and the other kernels influenced by it basically destroyed the > reputation of microkernels ... a simple read() of a disk file, which is > a single kernel call on a monolithic kernel and usually two context > switches on QNX, takes at least 8 context switches - client->VFS->disk > FS->partition driver->disk driver and back again). Hammer-nail syndrome. When the only tool you have for creating separate subsystems is processes, you wind up with a lot of processes. Who'd a thunk it. A system with a segmented memory which allows subroutine calls from one subsystem to another will have a lot less overhead. It does take hardware support to be really efficient, though. The x86 processors had that support, until Intel dropped it from the latest ones because nobody used it. Excuse me while I go bang my head on a very hard wall until the pain stops. Noel From mah at mhorton.net Mon Jun 24 09:10:22 2019 From: mah at mhorton.net (Mary Ann Horton Gmail) Date: Sun, 23 Jun 2019 16:10:22 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps Message-ID: Hunting around through my ancient stuff today, I ran across a 5.25" floppy drive labeled as having old Usenet maps. These may have historical interest. First off, I don't recognize the handwriting on the disk. It's not mine. Does anyone recognize it? (pic attached) I dug out my AT&T 6300 (XT clone) from the garage and booted it up. The floppy reads just fine. It has files with .MAP extension, which are ASCII Usenet maps from 1980 to 1984, and some .BBM files which are ASCII Usenet backbone maps up to 1987. There is also a file whose extension is .GRF from 1983 which claims to be a graphical Usenet map.  Does anyone have any idea what GRF is or what this map might be? I recall Brian Reid having a plotter-based Usenet geographic map in 84 or 85. I'd like to copy these files off for posterity. They read on DOS just fine. Is there a current best practice for copying off files? I would have guessed I'd need a to use the serial port, but my old PC has DOS 2.11 (not much serial copying software on it) and I don't have anything live with a serial port anymore. And it might not help with the GRF file. I took some photos of the screen with the earliest maps (the ones that fit on one screen.) So it's an option to type things in, at least for the early ASCII ones. Thanks,     Mary Ann -------------- next part -------------- A non-text attachment was scrubbed... Name: Floppy-Label.png Type: image/png Size: 64393 bytes Desc: not available URL: From krewat at kilonet.net Mon Jun 24 09:52:46 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Sun, 23 Jun 2019 19:52:46 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: Does the AT&T have a serial port? Kermit would be the way I'd go, but since you say you have nothing with serial ports, that could be a problem. A cheap usb-to-serial port might be in order. Then you can run Kermit 95 on a Windows 7 or earlier machine. (might work on later OS's too, but it's not supported) The flip side is how to get Kermit onto the DOS machine. I used a floppy recovery service a while back to read my old Commodore 64/PET disks - he was relatively inexpensive, and very responsive. http://retrofloppy.com/ On 6/23/2019 7:10 PM, Mary Ann Horton Gmail wrote: > Hunting around through my ancient stuff today, I ran across a 5.25" > floppy drive labeled as having old Usenet maps. These may have > historical interest. > > First off, I don't recognize the handwriting on the disk. It's not > mine. Does anyone recognize it? (pic attached) > > I dug out my AT&T 6300 (XT clone) from the garage and booted it up. > The floppy reads just fine. It has files with .MAP extension, which > are ASCII Usenet maps from 1980 to 1984, and some .BBM files which are > ASCII Usenet backbone maps up to 1987. > > There is also a file whose extension is .GRF from 1983 which claims to > be a graphical Usenet map.  Does anyone have any idea what GRF is or > what this map might be? I recall Brian Reid having a plotter-based > Usenet geographic map in 84 or 85. > > I'd like to copy these files off for posterity. They read on DOS just > fine. Is there a current best practice for copying off files? I would > have guessed I'd need a to use the serial port, but my old PC has DOS > 2.11 (not much serial copying software on it) and I don't have > anything live with a serial port anymore. And it might not help with > the GRF file. > > I took some photos of the screen with the earliest maps (the ones that > fit on one screen.) So it's an option to type things in, at least for > the early ASCII ones. > > Thanks, > >     Mary Ann > > From gtaylor at tnetconsulting.net Mon Jun 24 09:57:43 2019 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 23 Jun 2019 17:57:43 -0600 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> On 6/23/19 5:10 PM, Mary Ann Horton Gmail wrote: > Hunting around through my ancient stuff today, I ran across a 5.25" > floppy drive labeled as having old Usenet maps. These may have > historical interest. Intriguing. > First off, I don't recognize the handwriting on the disk. It's not mine. > Does anyone recognize it? (pic attached) > > I dug out my AT&T 6300 (XT clone) from the garage and booted it up. The > floppy reads just fine. It has files with .MAP extension, which are > ASCII Usenet maps from 1980 to 1984, and some .BBM files which are ASCII > Usenet backbone maps up to 1987. > > There is also a file whose extension is .GRF from 1983 which claims to > be a graphical Usenet map.  Does anyone have any idea what GRF is or > what this map might be? I recall Brian Reid having a plotter-based > Usenet geographic map in 84 or 85. Hum. > I'd like to copy these files off for posterity. They read on DOS just > fine. Is there a current best practice for copying off files? I would > have guessed I'd need a to use the serial port, but my old PC has DOS > 2.11 (not much serial copying software on it) and I don't have anything > live with a serial port anymore. And it might not help with the GRF file. I wonder if you could get away with something as simple as a null modem cable and the following commands: Source: copy a:\file COM1 Destination: copy COM1 c:\file Does the source machine have a hard drive? Do you have a blank (sacrificial) floppy disk? Can you copy the files anywhere so that they are in more than one place? Do you have a printer that you could create a (hexadecimal) printout? Do you have a machine that can accept a USB-to-Serial adapter? What about something like a Raspberry Pi? It has a serial port (though it needs a level shifter). > I took some photos of the screen with the earliest maps (the ones that > fit on one screen.) So it's an option to type things in, at least for > the early ASCII ones. I'd be interested in seeing them. Do you have a place that you can upload them to? -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4008 bytes Desc: S/MIME Cryptographic Signature URL: From tytso at mit.edu Mon Jun 24 09:54:00 2019 From: tytso at mit.edu (Theodore Ts'o) Date: Sun, 23 Jun 2019 19:54:00 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190623220856.AADD018C0CF@mercury.lcs.mit.edu> References: <20190623220856.AADD018C0CF@mercury.lcs.mit.edu> Message-ID: <20190623235400.GA1805@mit.edu> On Sun, Jun 23, 2019 at 06:08:56PM -0400, Noel Chiappa wrote: > > Hammer-nail syndrome. > > When the only tool you have for creating separate subsystems is processes, you > wind up with a lot of processes. Who'd a thunk it. > > A system with a segmented memory which allows subroutine calls from one subsystem > to another will have a lot less overhead. It does take hardware support to be > really efficient, though. The x86 processors had that support, until Intel dropped > it from the latest ones because nobody used it. One of the real problems with how x86 implemented segments was that the segments were layered on top of the 32-bit virtual address space implemented by the page tables. So if you wanted to use a pure segmented architecture, ala Multics, you run into the same problem as 32-bit IP addresses. If you need to allow for segments to grow, you very quickly fragment the 32-bit address space. If I recall correctly, this wasn't an issue with Multics because the DPS-8 had page tables for each segment. Maybe if Intel/AMD had kept segmentation support when x86_64 was developed, trying to do something more novel with segments could have worked when we went to 64-bits (or maybe, like IPv6, what's really needed is 128-bits of address space :-P). But, Itanic was supposed to be the dominant 64-bit architecture that was going to take over the whole world, and when that turned out not to be the case, AMD threw together x86_64 as the "just good enough" architectural extension. - Ted From gtaylor at tnetconsulting.net Mon Jun 24 10:02:28 2019 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 23 Jun 2019 18:02:28 -0600 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: On 6/23/19 5:52 PM, Arthur Krewat wrote: > Does the AT&T have a serial port? > > Kermit would be the way I'd go, but since you say you have nothing with > serial ports, that could be a problem. A cheap usb-to-serial port might > be in order. Then you can run Kermit 95 on a Windows 7 or earlier > machine. (might work on later OS's too, but it's not supported) > > The flip side is how to get Kermit onto the DOS machine. Does Kermit have an option like INTERLNK & INTERSVR have where you can run a "copy COM1 INTERxxx.EXE" to push the software across the serial port? I wonder what the requirements are for INTERLNK & INTERSVR. I don't know if they would go back to (MS-)DOS 2.11 or not. > I used a floppy recovery service a while back to read my old Commodore > 64/PET disks - he was relatively inexpensive, and very responsive. > > http://retrofloppy.com/ If the machine is able to read the files without error, then a recovery service might not be necessary. IMHO it's a question of getting one or more copies onto something else so that the existing floppy isn't the only copy. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4008 bytes Desc: S/MIME Cryptographic Signature URL: From tytso at mit.edu Mon Jun 24 10:03:42 2019 From: tytso at mit.edu (Theodore Ts'o) Date: Sun, 23 Jun 2019 20:03:42 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <20190624000342.GB1805@mit.edu> On Sun, Jun 23, 2019 at 04:10:22PM -0700, Mary Ann Horton Gmail wrote: > I'd like to copy these files off for posterity. They read on DOS just fine. > Is there a current best practice for copying off files? I would have guessed > I'd need a to use the serial port, but my old PC has DOS 2.11 (not much > serial copying software on it) and I don't have anything live with a serial > port anymore. And it might not help with the GRF file. Maybe this? http://www.deviceside.com/fc5025.html - Ted From web at loomcom.com Mon Jun 24 10:19:54 2019 From: web at loomcom.com (Seth Morabito) Date: Sun, 23 Jun 2019 17:19:54 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <698037ac-0a46-4c67-a455-2cd88f2997a9@www.fastmail.com> On Sun, Jun 23, 2019, at 4:11 PM, Mary Ann Horton Gmail wrote: > I'd like to copy these files off for posterity. They read on DOS just > fine. Is there a current best practice for copying off files? I would > have guessed I'd need a to use the serial port, but my old PC has DOS > 2.11 (not much serial copying software on it) and I don't have anything > live with a serial port anymore. And it might not help with the GRF file. If you can't find a more expedient way, I'd be happy to help read off the files if you're willing to part with the disk for a few days. I have experience reading many old diskette formats, and a PC dedicated to the task running DOS 6.22 and Windows for Workgroups 3.11. I definitely agree it would be good to save these files for posterity. > Thanks, > >     Mary Ann -Seth -- Seth Morabito Poulsbo, WA web at loomcom.com From lm at mcvoy.com Mon Jun 24 10:33:09 2019 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 23 Jun 2019 17:33:09 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <20190624003308.GC20473@mcvoy.com> I'd look around for an external floppy drive, plug it into a modern machine, download knoppix, boot that and it will read the disk. On Sun, Jun 23, 2019 at 04:10:22PM -0700, Mary Ann Horton Gmail wrote: > Hunting around through my ancient stuff today, I ran across a 5.25" floppy > drive labeled as having old Usenet maps. These may have historical interest. > > First off, I don't recognize the handwriting on the disk. It's not mine. > Does anyone recognize it? (pic attached) > > I dug out my AT&T 6300 (XT clone) from the garage and booted it up. The > floppy reads just fine. It has files with .MAP extension, which are ASCII > Usenet maps from 1980 to 1984, and some .BBM files which are ASCII Usenet > backbone maps up to 1987. > > There is also a file whose extension is .GRF from 1983 which claims to be a > graphical Usenet map.?? Does anyone have any idea what GRF is or what this > map might be? I recall Brian Reid having a plotter-based Usenet geographic > map in 84 or 85. > > I'd like to copy these files off for posterity. They read on DOS just fine. > Is there a current best practice for copying off files? I would have guessed > I'd need a to use the serial port, but my old PC has DOS 2.11 (not much > serial copying software on it) and I don't have anything live with a serial > port anymore. And it might not help with the GRF file. > > I took some photos of the screen with the earliest maps (the ones that fit > on one screen.) So it's an option to type things in, at least for the early > ASCII ones. > > Thanks, > > ?????? Mary Ann > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From mah at mhorton.net Mon Jun 24 10:35:52 2019 From: mah at mhorton.net (Mary Ann Horton Gmail) Date: Sun, 23 Jun 2019 17:35:52 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> These are great ideas. I can easily get USB-to-serial (and even USB-to-parallel) cables online that will fit the PC/XT compatible DB-25 plugs on the back of the PC.  I'll have to figure out how to fiddle with the baud rates and such. I solved the GRF file puzzle.  It turns out it's a text file - a Usenet article. And the same article is in the Google archive. https://groups.google.com/forum/#!search/group$3Anet.news.map$20philabs!dal/net.news.map/lhqyD7MOFe8/v0CQFMZyGboJ There is a cutoff notice at the end, both on the Usenet article and on the floppy file, but that may be intentional.  I'll have some fiddling to do.     Mary Ann On 6/23/19 5:02 PM, Grant Taylor via TUHS wrote: > On 6/23/19 5:52 PM, Arthur Krewat wrote: >> Does the AT&T have a serial port? >> >> Kermit would be the way I'd go, but since you say you have nothing >> with serial ports, that could be a problem. A cheap usb-to-serial >> port might be in order. Then you can run Kermit 95 on a Windows 7 or >> earlier machine. (might work on later OS's too, but it's not supported) >> >> The flip side is how to get Kermit onto the DOS machine. > > Does Kermit have an option like INTERLNK & INTERSVR have where you can > run a "copy COM1 INTERxxx.EXE" to push the software across the serial > port? > > I wonder what the requirements are for INTERLNK & INTERSVR.  I don't > know if they would go back to (MS-)DOS 2.11 or not. > >> I used a floppy recovery service a while back to read my old >> Commodore 64/PET disks - he was relatively inexpensive, and very >> responsive. >> >> http://retrofloppy.com/ > > If the machine is able to read the files without error, then a > recovery service might not be necessary.  IMHO it's a question of > getting one or more copies onto something else so that the existing > floppy isn't the only copy. > > > From mah at mhorton.net Mon Jun 24 10:40:16 2019 From: mah at mhorton.net (Mary Ann Horton Gmail) Date: Sun, 23 Jun 2019 17:40:16 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> References: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> Message-ID: <271b3e60-6b12-f579-01ff-5df152812a4f@mhorton.net> I put the screenshots (literally - with my phone) here: http://maryannhorton.com/usenet/ Note the preposterous claim that the 4/15/81 map is the "Backbone" - I have no idea where that came from. The backbone was first proposed 2 years later. Clearly this is a full map of Usenet as of 4/15/81.     Mary Ann On 6/23/19 4:57 PM, Grant Taylor via TUHS wrote: > >> I took some photos of the screen with the earliest maps (the ones >> that fit on one screen.) So it's an option to type things in, at >> least for the early ASCII ones. > > I'd be interested in seeing them.  Do you have a place that you can > upload them to? > > > From krewat at kilonet.net Mon Jun 24 10:53:19 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Sun, 23 Jun 2019 20:53:19 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> Message-ID: Both the AT&T and the USB cable will be "DTE" (Data Terminal Equipment - ala terminal) vs. "DCE" (Data Communication Equipment - ala modem) - you'll need a null-modem cable to correct that mismatch. Basically, if not using hardware handshake, swap pin 2 and 3. ;) On 6/23/2019 8:35 PM, Mary Ann Horton Gmail wrote: > These are great ideas. I can easily get USB-to-serial (and even > USB-to-parallel) cables online that will fit the PC/XT compatible > DB-25 plugs on the back of the PC.  I'll have to figure out how to > fiddle with the baud rates and such. > > I solved the GRF file puzzle.  It turns out it's a text file - a > Usenet article. And the same article is in the Google archive. > > https://groups.google.com/forum/#!search/group$3Anet.news.map$20philabs!dal/net.news.map/lhqyD7MOFe8/v0CQFMZyGboJ > > > There is a cutoff notice at the end, both on the Usenet article and on > the floppy file, but that may be intentional.  I'll have some fiddling > to do. > >     Mary Ann > > On 6/23/19 5:02 PM, Grant Taylor via TUHS wrote: >> On 6/23/19 5:52 PM, Arthur Krewat wrote: >>> Does the AT&T have a serial port? >>> >>> Kermit would be the way I'd go, but since you say you have nothing >>> with serial ports, that could be a problem. A cheap usb-to-serial >>> port might be in order. Then you can run Kermit 95 on a Windows 7 or >>> earlier machine. (might work on later OS's too, but it's not supported) >>> >>> The flip side is how to get Kermit onto the DOS machine. >> >> Does Kermit have an option like INTERLNK & INTERSVR have where you >> can run a "copy COM1 INTERxxx.EXE" to push the software across the >> serial port? >> >> I wonder what the requirements are for INTERLNK & INTERSVR. I don't >> know if they would go back to (MS-)DOS 2.11 or not. >> >>> I used a floppy recovery service a while back to read my old >>> Commodore 64/PET disks - he was relatively inexpensive, and very >>> responsive. >>> >>> http://retrofloppy.com/ >> >> If the machine is able to read the files without error, then a >> recovery service might not be necessary.  IMHO it's a question of >> getting one or more copies onto something else so that the existing >> floppy isn't the only copy. >> >> >> > From lm at mcvoy.com Mon Jun 24 10:56:14 2019 From: lm at mcvoy.com (Larry McVoy) Date: Sun, 23 Jun 2019 17:56:14 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> Message-ID: <20190624005614.GF20473@mcvoy.com> Arthur's comments bring back some memories. I probably still have this, a ribbon serial cable with male and female connectors on both ends and a breadboard in the middle. I could hook anything to anything :) That said, I'm *ecstatic* that I no longer have to deal with serial ports. On Sun, Jun 23, 2019 at 08:53:19PM -0400, Arthur Krewat wrote: > Both the AT&T and the USB cable will be "DTE" (Data Terminal Equipment - ala > terminal) vs. "DCE" (Data Communication Equipment - ala modem) - you'll need > a null-modem cable to correct that mismatch. Basically, if not using > hardware handshake, swap pin 2 and 3. ;) > > > On 6/23/2019 8:35 PM, Mary Ann Horton Gmail wrote: > >These are great ideas. I can easily get USB-to-serial (and even > >USB-to-parallel) cables online that will fit the PC/XT compatible DB-25 > >plugs on the back of the PC.?? I'll have to figure out how to fiddle with > >the baud rates and such. > > > >I solved the GRF file puzzle.?? It turns out it's a text file - a Usenet > >article. And the same article is in the Google archive. > > > >https://groups.google.com/forum/#!search/group$3Anet.news.map$20philabs!dal/net.news.map/lhqyD7MOFe8/v0CQFMZyGboJ > > > > > >There is a cutoff notice at the end, both on the Usenet article and on the > >floppy file, but that may be intentional.?? I'll have some fiddling to do. > > > >?????? Mary Ann > > > >On 6/23/19 5:02 PM, Grant Taylor via TUHS wrote: > >>On 6/23/19 5:52 PM, Arthur Krewat wrote: > >>>Does the AT&T have a serial port? > >>> > >>>Kermit would be the way I'd go, but since you say you have nothing > >>>with serial ports, that could be a problem. A cheap usb-to-serial port > >>>might be in order. Then you can run Kermit 95 on a Windows 7 or > >>>earlier machine. (might work on later OS's too, but it's not > >>>supported) > >>> > >>>The flip side is how to get Kermit onto the DOS machine. > >> > >>Does Kermit have an option like INTERLNK & INTERSVR have where you can > >>run a "copy COM1 INTERxxx.EXE" to push the software across the serial > >>port? > >> > >>I wonder what the requirements are for INTERLNK & INTERSVR. I don't know > >>if they would go back to (MS-)DOS 2.11 or not. > >> > >>>I used a floppy recovery service a while back to read my old Commodore > >>>64/PET disks - he was relatively inexpensive, and very responsive. > >>> > >>>http://retrofloppy.com/ > >> > >>If the machine is able to read the files without error, then a recovery > >>service might not be necessary.?? IMHO it's a question of getting one or > >>more copies onto something else so that the existing floppy isn't the > >>only copy. > >> > >> > >> > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From krewat at kilonet.net Mon Jun 24 10:50:47 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Sun, 23 Jun 2019 20:50:47 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <061daf0b-e01c-075f-2b01-ef0db7bbf2b6@kilonet.net> Another thing to think about, and that's only because I'm a dumpster-diver, is what's in the unallocated sectors? ;) On 6/23/2019 8:02 PM, Grant Taylor via TUHS wrote: > On 6/23/19 5:52 PM, Arthur Krewat wrote: >> Does the AT&T have a serial port? >> >> Kermit would be the way I'd go, but since you say you have nothing >> with serial ports, that could be a problem. A cheap usb-to-serial >> port might be in order. Then you can run Kermit 95 on a Windows 7 or >> earlier machine. (might work on later OS's too, but it's not supported) >> >> The flip side is how to get Kermit onto the DOS machine. > > Does Kermit have an option like INTERLNK & INTERSVR have where you can > run a "copy COM1 INTERxxx.EXE" to push the software across the serial > port? Not that I'm aware of. Things like NULs, and ^S can really ruin your day. Not to mention ^Z which a DOS copy might interpret as EOF. I only ever wrote programs to access the UART directly, but I remember my attempts at COPY or other DOS-specific ways of dealing with serial ports were never very successful. But that might have had more to do with buffer overruns (or in the case of the 8250 in the XT, a lack of a FIFO ala-16550 in the first place). Redirecting LPT1 to COM1 using MODE, I used to print to an LA100 using hardware handshaking. >> I used a floppy recovery service a while back to read my old >> Commodore 64/PET disks - he was relatively inexpensive, and very >> responsive. >> >> http://retrofloppy.com/ > > If the machine is able to read the files without error, then a > recovery service might not be necessary.  IMHO it's a question of > getting one or more copies onto something else so that the existing > floppy isn't the only copy. Of course, but in some cases, a few $'s thrown at the problem is easier than messing around with something you don't want to mess around with ;) I would be happy to contribute. From krewat at kilonet.net Mon Jun 24 11:12:08 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Sun, 23 Jun 2019 21:12:08 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <20190624005614.GF20473@mcvoy.com> References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> <20190624005614.GF20473@mcvoy.com> Message-ID: On 6/23/2019 8:56 PM, Larry McVoy wrote: > Arthur's comments bring back some memories. I probably still have this, > a ribbon serial cable with male and female connectors on both ends and > a breadboard in the middle. I could hook anything to anything :) > > That said, I'm *ecstatic* that I no longer have to deal with serial ports. > I did a lot of work with RS232 in the 80's to the point where my friend said I had coined a new phrase - basically sounds like "Are-Ess-too-turdy-too" said really fast ;) (I'm from NY) From serial lines that were slow, going to parallel interfaces for printers, parallel SCSI, and a few other parallel interfaces, I thought were nice, now we've gone back to SATA, SAS and PCI-E lanes that are basically serial interfaces. I have an RS232 breakout box I use for situations like this. Still having to deal with DTE-DCE issues to this day with Cisco, Nortel/Avaya, and other network, telecom or even SAN equipment. A recent Dell Compellent SC7xxxx I installed came with a USB cable, but it's really a USB to RS232 interface built into the controllers. SMH. ak From pechter at gmail.com Mon Jun 24 11:31:44 2019 From: pechter at gmail.com (William Pechter) Date: Sun, 23 Jun 2019 21:31:44 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> <20190624005614.GF20473@mcvoy.com> Message-ID: <0be48695-c49c-a3f6-5905-aa36878b209a@gmail.com> On 6/23/2019 9:12 PM, Arthur Krewat wrote: > > On 6/23/2019 8:56 PM, Larry McVoy wrote: >> Arthur's comments bring back some memories.  I probably still have this, >> a ribbon serial cable with male and female connectors on both ends and >> a breadboard in the middle.  I could hook anything to anything :) >> >> That said, I'm *ecstatic* that I no longer have to deal with serial >> ports. >> > > I did a lot of work with RS232 in the 80's to the point where my > friend said I had coined a new phrase - basically sounds like > "Are-Ess-too-turdy-too" said really fast ;) (I'm from NY) > > From serial lines that were slow, going to parallel interfaces for > printers, parallel SCSI, and a few other parallel interfaces, I > thought were nice, now we've gone back to SATA, SAS and PCI-E lanes > that are basically serial interfaces. > > I have an RS232 breakout box I use for situations like this. Still > having to deal with DTE-DCE issues to this day with Cisco, > Nortel/Avaya, and other network, telecom or even SAN equipment. A > recent Dell Compellent SC7xxxx I installed came with a USB cable, but > it's really a USB to RS232 interface built into the controllers. SMH. > > ak > > I'm still partial to having machines with real serial ports on 'em although I have all the USB serial/parallel cables as well. Still have a couple of desktops with Real RS232 ports just in case.  My old K6-2 has both the 5 1/4 and 3 1/2 inch floppies -- just in case. Bill From pechter at gmail.com Mon Jun 24 11:37:12 2019 From: pechter at gmail.com (William Pechter) Date: Sun, 23 Jun 2019 21:37:12 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <271b3e60-6b12-f579-01ff-5df152812a4f@mhorton.net> References: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> <271b3e60-6b12-f579-01ff-5df152812a4f@mhorton.net> Message-ID: On 6/23/2019 8:40 PM, Mary Ann Horton Gmail wrote: > I put the screenshots (literally - with my phone) here: > > http://maryannhorton.com/usenet/ > > Note the preposterous claim that the 4/15/81 map is the "Backbone" - I > have no idea where that came from. The backbone was first proposed 2 > years later. Clearly this is a full map of Usenet as of 4/15/81. > >     Mary Ann > > On 6/23/19 4:57 PM, Grant Taylor via TUHS wrote: >> >>> I took some photos of the screen with the earliest maps (the ones >>> that fit on one screen.) So it's an option to type things in, at >>> least for the early ASCII ones. >> >> I'd be interested in seeing them.  Do you have a place that you can >> upload them to? >> >> >> I checked my maps and have a version from 1996 and 2000... Ah for the days of being one hop from the house to !pyramid and the rest of the world. Bill From bakul at bitblocks.com Mon Jun 24 11:40:13 2019 From: bakul at bitblocks.com (Bakul Shah) Date: Sun, 23 Jun 2019 18:40:13 -0700 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: Your message of "Sun, 23 Jun 2019 20:53:19 -0400." References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> Message-ID: <20190624014020.D24AD156E40C@mail.bitblocks.com> On Sun, 23 Jun 2019 20:53:19 -0400 Arthur Krewat wrote: > Both the AT&T and the USB cable will be "DTE" (Data Terminal Equipment - > ala terminal) vs. "DCE" (Data Communication Equipment - ala modem) - > you'll need a null-modem cable to correct that mismatch. Basically, if > not using hardware handshake, swap pin 2 and 3. ;) Since mid 80s I have used Dave Yost's wiring scheme that converts a DB-25 or DB-9 adapter to an RJ-45 socket: http://yost.com/computers/RJ45-serial/ You wire any DB-25 or DB-9 DCE or DTE male or female adapter so that the RJ-45 socket has the above pinout. You figure out which device needs what kind of adapter and permanently attach the adapter. Now you can use a standard "half-twist" phone cable with 4, 6 or 8 wires and connect anything to anything. My last device with a real RS-232 interface (a CP-290 X10 controller) where I used this died 4-5 years ago. I still use serial ports on RaspberryPis but talk to them via serial<->USB adapters (these are 3.3V uarts). From krewat at kilonet.net Mon Jun 24 11:51:29 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Sun, 23 Jun 2019 21:51:29 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <0be48695-c49c-a3f6-5905-aa36878b209a@gmail.com> References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> <20190624005614.GF20473@mcvoy.com> <0be48695-c49c-a3f6-5905-aa36878b209a@gmail.com> Message-ID: On 6/23/2019 9:31 PM, William Pechter wrote: > I'm still partial to having machines with real serial ports on 'em > although I have all the USB serial/parallel cables as well. > > Still have a couple of desktops with Real RS232 ports just in case.  > My old K6-2 has both the 5 1/4 and 3 1/2 inch floppies -- just in case. Oh, so do I - one notable thing recently, a few Dell T7910 workstations (huge mothers, dual Xeons, plumbed x16 PCI-E interfaces to the second CPU, etc) actually had a male DB9 on the back. I was somewhat impressed. ak From usotsuki at buric.co Mon Jun 24 11:57:16 2019 From: usotsuki at buric.co (Steve Nickolas) Date: Sun, 23 Jun 2019 21:57:16 -0400 (EDT) Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> References: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> Message-ID: There's always the option of Interlnk if one has PC DOS 5.02 or later, or MS-DOS 6. I think it has a way to send itself over serial to a machine with DOS 3.3 or later and I want to say the 6300 came with 3.3. From usotsuki at buric.co Mon Jun 24 11:58:20 2019 From: usotsuki at buric.co (Steve Nickolas) Date: Sun, 23 Jun 2019 21:58:20 -0400 (EDT) Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <20190624003308.GC20473@mcvoy.com> References: <20190624003308.GC20473@mcvoy.com> Message-ID: On Sun, 23 Jun 2019, Larry McVoy wrote: > I'd look around for an external floppy drive, plug it into a modern machine, > download knoppix, boot that and it will read the disk. I dunno about you but I have had horrible luck with USB drives...to be fair, only one of them. -uso. From pechter at gmail.com Mon Jun 24 12:09:06 2019 From: pechter at gmail.com (pechter at gmail.com) Date: Sun, 23 Jun 2019 22:09:06 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> Message-ID: <14316a01-0f60-4e34-a616-fd592c436bff.maildroid@localhost> The 6300 came with 2.11 and there was an upgrade to 3.2.3 IIRC. Bill Sent from MailDroid -----Original Message----- From: Steve Nickolas To: tuhs at minnie.tuhs.org Sent: Sun, 23 Jun 2019 22:06 Subject: Re: [TUHS] Floppy to modern files for Usenet maps There's always the option of Interlnk if one has PC DOS 5.02 or later, or MS-DOS 6. I think it has a way to send itself over serial to a machine with DOS 3.3 or later and I want to say the 6300 came with 3.3. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsteve at superglobalmegacorp.com Mon Jun 24 13:17:01 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Mon, 24 Jun 2019 11:17:01 +0800 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <271b3e60-6b12-f579-01ff-5df152812a4f@mhorton.net> References: <1d46d78d-d5c2-993d-1a66-1732e7bf5f29@spamtrap.tnetconsulting.net> <271b3e60-6b12-f579-01ff-5df152812a4f@mhorton.net> Message-ID: I found the posting reference here: http://maryannhorton.com/usenet/010180.MAP.jpg https://utzoo.superglobalmegacorp.com/usenet/news005f1/b12/net.general/1501.txt http://maryannhorton.com/usenet/032183.BBM.jpg https://utzoo.superglobalmegacorp.com/usenet/news004f1/b11/net.news/541.txt http://maryannhorton.com/usenet/040581.MAP.jpg The map is mentioned in here: http://www.ais.org/~ronda/new.papers/articles/earlyversion.arpanet.txt As far as BBM files I just find this: Name: Graphics Display System (GDS) Purpose: Image display, conversion, thumbnail catalogs Version: 3.1e Author: Photodex Corporation FTP: ftp://ftp.netcom.com/pub/ph/photodex CIS: GO PHOTODEX, GDS Viewing Software (Lib 3) Imports: ANS (ANSI text), BBM, BMF, BMP, CUT/PAL (Dr. Halo), DIB, DL, FLC, FLI, FLX, GDS, GIF, GL, HAM, ICO, IFF/ILBM, IMG, JFI, JPG (JFIF), LBM, MAC, MP2 & MPA (MPEG Audio), MPG, PCC, PCX, RAX, RFX, RLE, SC? (ColoRIX), TGA, TIFF and TXT (text). Exports: ANS (ANSI text), BBM, BMP, CUT/PAL (Dr. Halo), GDS, GIF, IFF/ILBM, IMG, JFI, JPG (JFIF), LBM, PCC, PCX, RAX, RFX, RLE, SC? (ColoRIX), TGA, and TIFF. Features: File viewing, batch conversions, easy thumbnail catalog creation with many options, slide shows, automatic configuration. Includes 5000+ lines of hypertext help and prints 98 page cross referenced manual. Supports HGC, CGA, EGA, S-EGA, VGA, SVGA, XGA, TIGA and VESA. Registered versions print to HP PCL & 100% compatible laser and inkjet printers. Comments: Used by CompuServe sysops to catalog over 40,000 images regularly. ASP approved shareware. No idea if it survives at all. If you can screen shot the headers of the posts it’s easier to find them in old usenet archives. The UTZOO stuff is an AMAZING resource. It’s been incredibly valuable looking for old stuff. Of course the real fun is in searching it. Feel free to wget the BZ2 files from: https://utzoo.superglobalmegacorp.com/ Just don’t spider into the /usenet directory as it’s all the files extracted and you’ll no doubt be pulling several million files… much easier to get the .tar.bz2 ‘s. From: Mary Ann Horton Gmail Sent: Monday, June 24, 2019 8:40 AM To: tuhs at minnie.tuhs.org Subject: Re: [TUHS] Floppy to modern files for Usenet maps I put the screenshots (literally - with my phone) here: http://maryannhorton.com/usenet/ Note the preposterous claim that the 4/15/81 map is the "Backbone" - I have no idea where that came from. The backbone was first proposed 2 years later. Clearly this is a full map of Usenet as of 4/15/81.     Mary Ann On 6/23/19 4:57 PM, Grant Taylor via TUHS wrote: > >> I took some photos of the screen with the earliest maps (the ones >> that fit on one screen.) So it's an option to type things in, at >> least for the early ASCII ones. > > I'd be interested in seeing them.  Do you have a place that you can > upload them to? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gtaylor at tnetconsulting.net Mon Jun 24 13:20:19 2019 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Sun, 23 Jun 2019 21:20:19 -0600 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <20190624014020.D24AD156E40C@mail.bitblocks.com> References: <92db46d5-d821-a792-7753-dfc5f2898cbf@mhorton.net> <20190624014020.D24AD156E40C@mail.bitblocks.com> Message-ID: <8a47729f-d879-8e0d-f88b-eeb184a77f7d@spamtrap.tnetconsulting.net> On 6/23/19 7:40 PM, Bakul Shah wrote: > Since mid 80s I have used Dave Yost's wiring scheme that converts a > DB-25 or DB-9 adapter to an RJ-45 socket: > > http://yost.com/computers/RJ45-serial/ > > You wire any DB-25 or DB-9 DCE or DTE male or female adapter so > that the RJ-45 socket has the above pinout. You figure out which > device needs what kind of adapter and permanently attach the adapter. > Now you can use a standard "half-twist" phone cable with 4, 6 or 8 > wires and connect anything to anything. +10 for Yost wiring scheme. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4008 bytes Desc: S/MIME Cryptographic Signature URL: From rminnich at gmail.com Tue Jun 25 02:21:37 2019 From: rminnich at gmail.com (ron minnich) Date: Mon, 24 Jun 2019 09:21:37 -0700 Subject: [TUHS] Any oldtimers remember anything about the KS11 on the -11/20? In-Reply-To: <20190622181719.E10E918C0B4@mercury.lcs.mit.edu> References: <20190622181719.E10E918C0B4@mercury.lcs.mit.edu> Message-ID: just double checking, in case the odd.html had a typo: it was a KS11, not a KT11-B? Is there any chance there was an error in recollection? ron On Sat, Jun 22, 2019 at 11:18 AM Noel Chiappa wrote: > > This is an appeal to the few really-old-timers (i.e. who used the PDP-11/20 > version of Unix) on the list to see if they remember _anything_ of the KS11 > memory mapping unit used on that machine. > > Next to nothing is known of the KS11. Dennis' page "Odd Comments and Strange > Doings in Unix": > > https://www.bell-labs.com/usr/dmr/www/odd.html > > has a story involving it (at the end), and that is all I've ever been able > to find out about it. > > I don't expect documentation, but I am hoping someone will remember > _basically_ what it did. My original guess as to its functionality, from that > page, was that it's not part of the CPU, but a UNIBUS device, placed between > the UNIBUS leaving the CPU, and the rest of the the bus, which perhaps mapped > addresses around (and definitely limited user access to I/O page addresses). > > It might also have mapped part of the UNIBUS space which the -11/20 CPU _can_ > see (i.e. in the 0-56KB range) up to UNIBUS higher addresses, where 'extra' > memory is configured - but that's just a guess; but it is an example of the > kind of info I'd like to find out about it - just the briefest of high-level > descriptions would be an improvement on what little we have now! > > On re-reading that page, I see it apparently supported some sort of > user/kernel mode distinction, which might have require a tie-in to the > CPU. (But not necessarily; if there was a flop in the KS11 which stored the > 'CPU mode' bit, it might be automatically cleared on all interrupts. Not sure > how it would have handled traps, though.) > > Even extremely dim memories will be an improvement on the blank canvas we > have now! > > Noel From rminnich at gmail.com Tue Jun 25 02:33:54 2019 From: rminnich at gmail.com (ron minnich) Date: Mon, 24 Jun 2019 09:33:54 -0700 Subject: [TUHS] Any oldtimers remember anything about the KS11 on the -11/20? In-Reply-To: References: <20190622181719.E10E918C0B4@mercury.lcs.mit.edu> Message-ID: ah nvm, yeah, KS11. Wow. That was just about the time I was getting started in this game, memory is so hazy. On Mon, Jun 24, 2019 at 9:21 AM ron minnich wrote: > > just double checking, in case the odd.html had a typo: it was a KS11, > not a KT11-B? Is there any chance there was an error in recollection? > > ron > > On Sat, Jun 22, 2019 at 11:18 AM Noel Chiappa wrote: > > > > This is an appeal to the few really-old-timers (i.e. who used the PDP-11/20 > > version of Unix) on the list to see if they remember _anything_ of the KS11 > > memory mapping unit used on that machine. > > > > Next to nothing is known of the KS11. Dennis' page "Odd Comments and Strange > > Doings in Unix": > > > > https://www.bell-labs.com/usr/dmr/www/odd.html > > > > has a story involving it (at the end), and that is all I've ever been able > > to find out about it. > > > > I don't expect documentation, but I am hoping someone will remember > > _basically_ what it did. My original guess as to its functionality, from that > > page, was that it's not part of the CPU, but a UNIBUS device, placed between > > the UNIBUS leaving the CPU, and the rest of the the bus, which perhaps mapped > > addresses around (and definitely limited user access to I/O page addresses). > > > > It might also have mapped part of the UNIBUS space which the -11/20 CPU _can_ > > see (i.e. in the 0-56KB range) up to UNIBUS higher addresses, where 'extra' > > memory is configured - but that's just a guess; but it is an example of the > > kind of info I'd like to find out about it - just the briefest of high-level > > descriptions would be an improvement on what little we have now! > > > > On re-reading that page, I see it apparently supported some sort of > > user/kernel mode distinction, which might have require a tie-in to the > > CPU. (But not necessarily; if there was a flop in the KS11 which stored the > > 'CPU mode' bit, it might be automatically cleared on all interrupts. Not sure > > how it would have handled traps, though.) > > > > Even extremely dim memories will be an improvement on the blank canvas we > > have now! > > > > Noel From jsteve at superglobalmegacorp.com Tue Jun 25 03:04:08 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Tue, 25 Jun 2019 01:04:08 +0800 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190623235400.GA1805@mit.edu> References: <20190623220856.AADD018C0CF@mercury.lcs.mit.edu> <20190623235400.GA1805@mit.edu> Message-ID: So with that Mt Xinu Mach/386 thing I thought I’d take another stab at building the source from the CSRG CD-ROM set. The makefiles from the i386 version are so cut up it’s a seemingly hopeless mess. I took the mach.kernel.mk directory and tried to build of 4.3BSD UWisc, but that went nowhere quick as the tool chain just isn’t right and there is a bunch of VAX stuff missing. It looks more complete for the SUN-3. So in a fit of rage, I copied the bare needed i386 files into the SUN-3 tree and it actually compiles. ROUGH notes…. Mach25 is where I put the 386 directory & running from inside the mach.kernel.mk directory. mv ../mach25/sys/i386 . mv ../mach25/sys/i386at . mv ../mach25/sys/mach/i386 mach mv ../mach25/sys/sysV . cp ../mach25/sys/conf/*i386* conf ln -s i386 machine ln -s mach/i386 mach/machine cp Makeconf Makeconf-orig vi Makeconf ------ bash$ diff Makeconf-orig Makeconf 85c85,86 < CONFIG = ${${TARGET_MACHINE}_CONFIG?${${TARGET_MACHINE}_CONFIG}:STD+ANY+EXP} --- > #CONFIG = ${${TARGET_MACHINE}_CONFIG?${${TARGET_MACHINE}_CONFIG}:STD+ANY+EXP} > CONFIG = STD+WS-afs-nfs 89a91 > #SOURCEDIR = /usr/src/mach.kernel.mk 91c93,95 < OBJECTDIR = ../../../obj/@sys/kernel/${KERNEL_SERIES} --- > #OBJECTDIR = ../../../obj/@sys/kernel/${KERNEL_SERIES} > #OBJECTDIR = /usr/src/mach.kernel.mk/obj > OBJECTDIR = ./obj ------ vi Makefile include ../../${MAKETOP}Makefile-common to include ${MAKETOP}Makefile-common vi src/config/Makefile include ../../${MAKETOP}Makefile-common to include ${MAKETOP}Makefile-common mkdir obj make And it actually compiled… cc -c -O -MD -I. -I../../sys -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/pic_isa.c; ; ; cc -c -O -MD -I. -I../../sys -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/rtc.c; ; ; cc -c -O -MD -I. -I../../sys -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/wt.c; ; ; cc -c -O -MD -I. -I../../sys -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../machine/swapgeneric.c (null command) (null command) (null command) loading vmunix.sys rearranging symbols text data bss dec hex 479200 47980 125520 652700 9f59c ln vmunix.sys vmunix md -f -d `ls *.d` ln -s STD+WS-afs-nfs/vmunix KERNEL.STD+WS-afs-nfs Naturally the Mt Xinu bootloader won’t run it. 479200+47980+125520[+40968+42516] That’s all I get out of it. I’ll have to mess with it later on as it’s getting late, but I thought it was worth sharing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From aek at bitsavers.org Tue Jun 25 06:56:13 2019 From: aek at bitsavers.org (Al Kossow) Date: Mon, 24 Jun 2019 13:56:13 -0700 Subject: [TUHS] OT: Need help getting old 9 track tapes read In-Reply-To: <201904280845.x3S8j7GQ008565@freefriends.org> References: <201904280845.x3S8j7GQ008565@freefriends.org> Message-ID: On 4/28/19 1:45 AM, arnold at skeeve.com wrote: > There was discussion here a while back about services that will > recover such tapes and so on. But I didn't save any of that information. Chuck Guzis does excellent work and recommend that you use his services. I hope the Georgia Tech tapes are recoverable since I have been trying to find it for a long time as well. From michael at kjorling.se Tue Jun 25 07:07:48 2019 From: michael at kjorling.se (Michael =?utf-8?B?S2rDtnJsaW5n?=) Date: Mon, 24 Jun 2019 21:07:48 +0000 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <20190624210748.lpkrouguj4sy7b7b@h-174-65.A328.priv.bahnhof.se> On 23 Jun 2019 18:02 -0600, from tuhs at minnie.tuhs.org (Grant Taylor via TUHS): > I wonder what the requirements are for INTERLNK & INTERSVR. I don't know if > they would go back to (MS-)DOS 2.11 or not. The OS/2 Museum claims at [1] that the network redirector was added in 3.0. I'd expect INTERLNK/INTERSVR to need redirector support, and if that assumption is correct, those wouldn't work on any pre-3.0 versions of Microsoft's DOS (whether MS-DOS or PC-DOS), and support may be spotty on versions earlier than the one where they were introduced depending on which exact features are used. Also, a cursory glance at a MS-DOS 3.1 user's manual and user's reference that I have lying around does not list INTERLNK/INTERSRV in the command reference, so those would presumably have come later than that. Wikipedia appears to confirm this at [2] by claiming they were introduced in PC-DOS 5.02 / MS-DOS 6.0; the cited source at [3], [4] simply says "6.0 and later" without specifying a variant. So, almost certainly not that easy, unfortunately. [1] http://www.os2museum.com/wp/dos/dos-3-0-3-2/ [2] https://en.wikipedia.org/wiki/List_of_DOS_commands#INTERSVR_and_INTERLNK [3] http://www.easydos.com/interlink.html [4] http://www.easydos.com/intersvr.html -- Michael Kjörling • https://michael.kjorling.se • michael at kjorling.se “The most dangerous thought that you can have as a creative person is to think you know what you’re doing.” (Bret Victor) From usotsuki at buric.co Tue Jun 25 07:30:52 2019 From: usotsuki at buric.co (Steve Nickolas) Date: Mon, 24 Jun 2019 17:30:52 -0400 (EDT) Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: <20190624210748.lpkrouguj4sy7b7b@h-174-65.A328.priv.bahnhof.se> References: <20190624210748.lpkrouguj4sy7b7b@h-174-65.A328.priv.bahnhof.se> Message-ID: On Mon, 24 Jun 2019, Michael Kjörling wrote: > The OS/2 Museum claims at [1] that the network redirector was added in > 3.0. I'd expect INTERLNK/INTERSVR to need redirector support, and if > that assumption is correct, those wouldn't work on any pre-3.0 > versions of Microsoft's DOS (whether MS-DOS or PC-DOS), and support > may be spotty on versions earlier than the one where they were > introduced depending on which exact features are used. A prototype was introduced in 3.0; it wasn't exactly usable until 3.1 iirc. > Also, a cursory glance at a MS-DOS 3.1 user's manual and user's > reference that I have lying around does not list INTERLNK/INTERSRV in > the command reference, so those would presumably have come later than > that. Wikipedia appears to confirm this at [2] by claiming they were > introduced in PC-DOS 5.02 / MS-DOS 6.0; the cited source at [3], [4] > simply says "6.0 and later" without specifying a variant. I can confirm the presence of Interlnk in PC DOS 5.02 as well as MS-DOS 6.00 (and this is why I specifically mentioned those versions). I've done a lot of research on MS-DOS/PC DOS history. ;p Interlnk does have a way, as I mentioned, to copy itself over a serial cable. I suppose it probably relies on CTTY and DEBUG or something. -uso. From gregg.drwho8 at gmail.com Tue Jun 25 07:59:06 2019 From: gregg.drwho8 at gmail.com (Gregg Levine) Date: Mon, 24 Jun 2019 17:59:06 -0400 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: <20190624210748.lpkrouguj4sy7b7b@h-174-65.A328.priv.bahnhof.se> Message-ID: Hello! And I can confirm that it does work on DOS 3.30. I wasn't aware that it had that sort of history, but such are things. I've used it to send things from a small portable to a laptop the laptop ran DOS 6.22, and the portable was running 3.30 Mary Anne you own an AT&T PC 6300? Wow, I got my start along the regular desktop market with one. And their 80286 version as well. ----- Gregg C Levine gregg.drwho8 at gmail.com "This signature fought the Time Wars, time and again." On Mon, Jun 24, 2019 at 5:31 PM Steve Nickolas wrote: > > On Mon, 24 Jun 2019, Michael Kjörling wrote: > > > The OS/2 Museum claims at [1] that the network redirector was added in > > 3.0. I'd expect INTERLNK/INTERSVR to need redirector support, and if > > that assumption is correct, those wouldn't work on any pre-3.0 > > versions of Microsoft's DOS (whether MS-DOS or PC-DOS), and support > > may be spotty on versions earlier than the one where they were > > introduced depending on which exact features are used. > > A prototype was introduced in 3.0; it wasn't exactly usable until 3.1 > iirc. > > > Also, a cursory glance at a MS-DOS 3.1 user's manual and user's > > reference that I have lying around does not list INTERLNK/INTERSRV in > > the command reference, so those would presumably have come later than > > that. Wikipedia appears to confirm this at [2] by claiming they were > > introduced in PC-DOS 5.02 / MS-DOS 6.0; the cited source at [3], [4] > > simply says "6.0 and later" without specifying a variant. > > I can confirm the presence of Interlnk in PC DOS 5.02 as well as MS-DOS > 6.00 (and this is why I specifically mentioned those versions). I've done > a lot of research on MS-DOS/PC DOS history. ;p > > Interlnk does have a way, as I mentioned, to copy itself over a serial > cable. I suppose it probably relies on CTTY and DEBUG or something. > > -uso. From lm at mcvoy.com Tue Jun 25 10:06:30 2019 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 24 Jun 2019 17:06:30 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> Message-ID: <20190625000630.GA7655@mcvoy.com> All interesting points but messy code is messy code. I had a bunch of the FreeBSD folks over here for a BBQ a couple days ago (they want you at the next one Clem). We got to talking about Mach and someone told me that in the FreeBSD tree the Mach code was gone through and 60% of was deleted and it still worked. It just seems like the Mach folks wanted to try this, and that, and then next thing and never went back to clean up the mess. Even after that FreeBSD cleanup the code looks like crap to me. Let me define "crap" by defining good code. Good code is very stylized, you learn part of the system and you get the style and you can predict what the next part of the system looks like and when you get there, yeah, the prediction and the code matches. That's what SunOS 4.x was like when I sort of "got it". I just guessed what I'd see and that was what I saw. The Mach code, IMHO, completely fails that test. You can't predict anything, there are layers of code that don't seem to do anything, you have to tag through them to get to the code that does something. There are all sorts of helper functions (after the cleanup!) that don't seem to be used. I get that it was a big effort and I get that it was research code, but Clem, I can point you to code that I wrote as a grad student and it is SunOS like. You can predict what stuff looks like. That's just clean code. Mach was never like that, I'm sorry, but it wasn't. Was it/is it useful? Yeah. Would I like to work in that code if I had the juice? No, hard pass. And I was in it recently enough to submit a patch to the FreeBSD tree, trivial patch but you have to read a bunch of code to get to that trivial patch. It wasn't fun reading that code. Maybe I'm just old and washed up but maybe I know clean code when I see it. On Sun, Jun 23, 2019 at 09:52:33PM +0000, Clem Cole wrote: > A couple of thoughts... > > 1.) Mach2.5 and 3.0 was part of an *extremely successful research project* > but also did suffer from issues associated with being so. CSRG BTW *was > not a research project*, contrary to its name, it was a support contract > for DARPA since BTL would not support UNIX the way DEC, IBM, *et al*, did > with their products. The reality is that Mach more than Thoth**), V > Kernel, QNX, Sol, Chrous, Minux or any other uK of the day, changed the way > people started to think about building an OS. Give Rashid and team > credit - it showed some extremely interesting and aggressive ideas could be > made to work. > > 2.) Comparing Mach with BSD or SunOS 4.3 is not really a valid comparison. > They had different goals and targets. Comparing Ultrix, Tru64, or Mac > OSx with SunOS (or Solaris for that matter) is fair. They all were > products. They needed to be clean, maintainable and extensible [as Larry > likes to point out, Sun traded a great deal of that away for political > reasons with Solaris]. But the bottom line, you are comparing a test > vehicle to a production one. And I while I agree with Larry on the > internals and a lot of the actual code in Mach, I was always pretty damned > impressed with what they crammed into a 5 lbs bag in a very short time. > > 3.) Mach2.5/386/Vax/etc.. << OSF/1 386 the later is similar what MtUnix > shipped. Both are 'hybrid' kernels. But while MtUnix created a product > with it, they were too small to do what DEC would later do. But the > investment was greater than I think they could really afford. > > 4.) Mach 3.0 was from CMU, Mach 4.0 (which is still sort of available) was > from the OSF/1 [this is a pure uK]. > > 3.) DEC OSF/1 (for MIPS) << Tru64 (for Alpha) - *a.k.a.* Digital UNIX - yes > both started with a Mach 2.5 hybrid kernel and the later was mostly the > same as OSF/1386, and both supported the Mach2.5 kernel message system - > but DEC's team rewrote darned near everything in the kernel -- which in > fact was both a bless and a curse [more in a minute]. > > Ok, so why have I bothered with all this mess. The fact is Mach was able > to be turned into a product, both Apple and DEC did it. Apple had the > advantage of starting with NextOS which (along with machTen) was the first > short at making a 'product' out of it. But they invested a lot over the > years and incrementally changed it. Enough to create XNU. DEC was a > different story (which I lived a bit of personally). > > The DEC PMAX (mips) and the Intel 386 were the first references from OSF. > OSF had an issue. IBM was supposed to deliver an OS, but for a number of > reasons was not ready when OSF needed something. CMU had something that > was 'good enough.' > > This is probably where Larry and I differ a little on shipping code. I'm a > great believer figure out one solid goal and nailing it, and the rest is > 'good enough' - i.e. fix in version 2. I think OSF/1 as a reference > system nailed it. Their job was get something out as a starting base that > ran on at least 2 workstations (and one server - which IIRC was an HP, > maybe an Encore box) but able to be shipped on an AT&T V.3 unlimited > license [which IBM had brought to the table]. The fact that they did not > spend a lot of time cleaning up about CMU at this stage was not their job. > The kernel had to be good enough - and it was (Larry might argue Mach2.5 > vs SunOS 4.3 it was not as good technically - and he might be right - but > that was not their job). > > So DEC gets a new code based. They have Ultrix (a product) for the PMAX. > OSF has released the reference port. From a kernel code quality > standpoint, OSF1 1.0/PMAX < Ultrix/RISC 4.5. They also are moving to a > new 64-bit processor that is not going to run either VAX or PMAX binaries ( > *i.e.* you will have to recompile). Two technical decisions and one > marketing one were made at the management level that would later doom > Tru64. First, it was agreed that Tru64 was going to be 'pure 64-bit' and > it turned out >>none of the ISVs had clean code. Moreover, there were no > tools to help find 64-bit issues. This single choice cost DEC 3 years in > the ability to ship Tru64/Alpha. The second choice was DEC's team decided > to re-write OSF/1 subsystem by subsystem. The argument would be: the XXX > system sucks. It will never scale on a 64-bit system and it will not work > for clusters. XXX was at least Memory Management, Terminal Handler, Basic > I/O, SCSI, File System. The >>truth<< is each of these was actually right > in the small, they did suck. But the fact is, they all were good enough > to get the system out the door and get customers and ISV's starting the > process of using the system. Yes, Megasafe is an excellent FS, but UFS > was good enough to start. The marketing decision BTW, that not to ship > Tru64/PMAX. Truth is it was running internally. But Marketing held that > Tru64 was the sexy cool thing and did not want to make it available. The > argument was they would have to support it. But the truth is that asking > ISV's and customers to switch Architecture and OS in one jump, opened the > door to consider Sun or HP (and with Tru64/Alpha's ecosystem taking 3 more > years, people left DEC). > > > > > > ** Mike Malcolm was the primary author of Thoth as his PhD from Waterloo. > HIs officemate, Kelly Booth (of the 'Graphics Killer-Bs) had a tee-shirt > made that exhaled: 'Thoth Thucks' and gave them to the lot of the Waterloo > folks. BTW, Mike and Cheridon would later go to Stanford and create V. > Two of their students would create QNX with still lives. > > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From tytso at mit.edu Tue Jun 25 10:31:21 2019 From: tytso at mit.edu (Theodore Ts'o) Date: Mon, 24 Jun 2019 20:31:21 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190625000630.GA7655@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> Message-ID: <20190625003120.GA28608@mit.edu> On Mon, Jun 24, 2019 at 05:06:30PM -0700, Larry McVoy wrote: > All interesting points but messy code is messy code. I had a bunch of the > FreeBSD folks over here for a BBQ a couple days ago (they want you at the > next one Clem). We got to talking about Mach and someone told me that in > the FreeBSD tree the Mach code was gone through and 60% of was deleted and > it still worked. It just seems like the Mach folks wanted to try this, > and that, and then next thing and never went back to clean up the mess. Welcome to academic/research code. :-) I'm reminded of a description of the Coda File System by Peter Braam; he said that it was irretrivably tainted by a dozen Ph.D. students working on their thesis. Naturally, once they had done the necessary work for them to get their doctorate, any interest in doing the necessary code cleanup for their various experimental efforts evaporated. He tried cleaning it up, and eventually gave up and decided to the only solution was a rewrite and redesign from scratch.... I used to be annoyed when professors and their graduate students would do their work based on same ancient version of Linux. (In general, the last version of Linux dating from the professor had time to hack on code.) I later decided that was a feature, not a bug, because it meant no one would be tempted to take academic code and try to put it into the mainline kernel... - Ted From lm at mcvoy.com Tue Jun 25 10:45:23 2019 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 24 Jun 2019 17:45:23 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190625003120.GA28608@mit.edu> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> Message-ID: <20190625004523.GB7655@mcvoy.com> On Mon, Jun 24, 2019 at 08:31:21PM -0400, Theodore Ts'o wrote: > On Mon, Jun 24, 2019 at 05:06:30PM -0700, Larry McVoy wrote: > > All interesting points but messy code is messy code. I had a bunch of the > > FreeBSD folks over here for a BBQ a couple days ago (they want you at the > > next one Clem). We got to talking about Mach and someone told me that in > > the FreeBSD tree the Mach code was gone through and 60% of was deleted and > > it still worked. It just seems like the Mach folks wanted to try this, > > and that, and then next thing and never went back to clean up the mess. > > Welcome to academic/research code. :-) Like I said, I can point anyone at code I wrote as a grad student that while I'm not proud of the style, it has style and it is clean. Just because you are a grad student that doesn't excuse messy code. If you write messy code then you're a bad hire. > I'm reminded of a description of the Coda File System by Peter Braam; > he said that it was irretrivably tainted by a dozen Ph.D. students > working on their thesis. Naturally, once they had done the necessary > work for them to get their doctorate, any interest in doing the > necessary code cleanup for their various experimental efforts > evaporated. Yeah, like I said, bad hires. People who are good coders take pride in their work. They put in the extra time to clean it up. That's why SunOS 4.x was a nice code base, everyone pulled their weight to make it be so. I get that that is unusual but it is super nice when it happens. And I wasn't trying to belittle the Mach effort, I'm impressed with what it does. I am most definitely belittling the people who did it. Not because of what they accomplished, that's cool, but they didn't care enough to clean it up. That sucks. And that means they suck as professional programmers. I'm a canoe guy and any canoe guy knows that the ultimate insult is "I wouldn't want him in my boat." Well, I wouldn't want the Mach people, for all their talent and accomplishments, on my team. I like people who get the job done, all the way done. Code is clean, the docs cover the code, the test cases are there. Done done. I just don't buy that academic/research code needs to be bad. If the people doing it are people you'd want to hire, they get it done done. I get that I'm describing a unicorn but I was one, and I'm not that great. Doesn't seem so much to ask that people give a shit and do it right. From rich.salz at gmail.com Tue Jun 25 11:00:28 2019 From: rich.salz at gmail.com (Richard Salz) Date: Mon, 24 Jun 2019 21:00:28 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190625004523.GB7655@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> Message-ID: Is this really the kind of commentary appropriate for this list? I mean I'm new here, but... -------------- next part -------------- An HTML attachment was scrubbed... URL: From khm at sciops.net Tue Jun 25 10:55:28 2019 From: khm at sciops.net (Kurt H Maier) Date: Mon, 24 Jun 2019 17:55:28 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190625004523.GB7655@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> Message-ID: <20190625005528.GA11929@wopr> On Mon, Jun 24, 2019 at 05:45:23PM -0700, Larry McVoy wrote: > > Like I said, I can point anyone at code I wrote as a grad student that > while I'm not proud of the style, it has style and it is clean. Just > because you are a grad student that doesn't excuse messy code. If you > write messy code then you're a bad hire. > This is akin to complaining about laborers not polishing railroad spikes before hammering them into the sleepers. It's hard enough to find people willing to touch computers at all for grad-student "wages," much less ones both capable & willing to be held to production-code standards on budgets that barely put food on the table, one fiscal year at a time. The systems engineer side of me really wants to agree with you, but the state of academic computing has not been amenable to this standard for some years -- ever, in terms of my career. We're lucky to get working code, full stop. There is no funding for *nice* code. Funding directed toward nice code will be cut next quarter. People advocating for funding for nice code will find their annual performance review is suddenly a multi-player game. Some institutions are better than others. The place I work now has explicit policies regarding "sustainable" software development, and it's an absolute delight ... which does not exist in many (most?) places. khm From rminnich at gmail.com Tue Jun 25 11:08:30 2019 From: rminnich at gmail.com (ron minnich) Date: Mon, 24 Jun 2019 18:08:30 -0700 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") Message-ID: I always wondered who wrote this, anyone know? I have my suspicions but ... ".SH BUGS .I Ptrace is unique and arcane; it should be replaced with a special file which can be opened and read and written. The control functions could then be implemented with .IR ioctl (2) calls on this file. This would be simpler to understand and have much higher performance." it's interesting in the light of the later plan 9 proc interface. ron From dave at horsfall.org Tue Jun 25 11:17:05 2019 From: dave at horsfall.org (Dave Horsfall) Date: Tue, 25 Jun 2019 11:17:05 +1000 (EST) Subject: [TUHS] Testing... Message-ID: Sorry for the noise, but this list is awfully quiet. -- Dave From mckusick at mckusick.com Tue Jun 25 12:27:20 2019 From: mckusick at mckusick.com (Kirk McKusick) Date: Mon, 24 Jun 2019 19:27:20 -0700 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") Message-ID: <201906250227.x5P2RKZs083727@chez.mckusick.com> > From: ron minnich > To: TUHS main list > Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and > arcane") > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > I always wondered who wrote this, anyone know? I have my suspicions but ... > > ".SH BUGS > .I Ptrace > is unique and arcane; it should be replaced with a special file which > can be opened and read and written. The control functions could then > be implemented with > .IR ioctl (2) > calls on this file. This would be simpler to understand and have much > higher performance." > > it's interesting in the light of the later plan 9 proc interface. > > ron The manual pages were not yet under SCCS, so the best time gap that I can give you is that the above text was added between the release of 3BSD (Nov 1979) and 4.0BSD (Nov 1980). Most likely it was Bill Joy that made that change. Kirk McKusick From gregg.drwho8 at gmail.com Tue Jun 25 13:07:22 2019 From: gregg.drwho8 at gmail.com (Gregg Levine) Date: Mon, 24 Jun 2019 23:07:22 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: Hello! Actually Chris, I found a complete collection of both CMU Mach and the Flux Group Mach, and even MkMach at the FTP2 site for the French OpenBSD location, ftp://ftp2.fr.openbsd.org under the pub and the mach directories. In all actuality I first discovered the Mach code base and the binary at the Flux Group offices of the Utah Computer Sciences site. They shut that down around the turn of the century. And once at the Arizona site for their computer sciences site. I believe it is gone as is the CMU one. And Jason I found your Gunkies Wiki with a link to your incredible storage site. ----- Gregg C Levine gregg.drwho8 at gmail.com "This signature fought the Time Wars, time and again." On Sun, Jun 23, 2019 at 12:45 AM Chris Hanson wrote: > > Does anyone know whether CMU’s local Mach sources have been preserved? > > I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. > > I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. > > If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. > > — Chris > > Sent from my iPhone From jgevaryahu at hotmail.com Tue Jun 25 13:54:28 2019 From: jgevaryahu at hotmail.com (Jonathan Gevaryahu) Date: Tue, 25 Jun 2019 03:54:28 +0000 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: I'd use something like imagedisk or teledisk or anadisk for reading the diskette; this will also preserve the deleted/unused sectors, the boot sector and the disk filesystem/metadata, while just copying the files off will lose most of this data. On 6/23/2019 7:10 PM, Mary Ann Horton Gmail wrote: > Hunting around through my ancient stuff today, I ran across a 5.25" > floppy drive labeled as having old Usenet maps. These may have > historical interest. > > First off, I don't recognize the handwriting on the disk. It's not > mine. Does anyone recognize it? (pic attached) > > I dug out my AT&T 6300 (XT clone) from the garage and booted it up. > The floppy reads just fine. It has files with .MAP extension, which > are ASCII Usenet maps from 1980 to 1984, and some .BBM files which are > ASCII Usenet backbone maps up to 1987. > > There is also a file whose extension is .GRF from 1983 which claims to > be a graphical Usenet map.  Does anyone have any idea what GRF is or > what this map might be? I recall Brian Reid having a plotter-based > Usenet geographic map in 84 or 85. > > I'd like to copy these files off for posterity. They read on DOS just > fine. Is there a current best practice for copying off files? I would > have guessed I'd need a to use the serial port, but my old PC has DOS > 2.11 (not much serial copying software on it) and I don't have > anything live with a serial port anymore. And it might not help with > the GRF file. > > I took some photos of the screen with the earliest maps (the ones that > fit on one screen.) So it's an option to type things in, at least for > the early ASCII ones. > > Thanks, > >     Mary Ann > > -- Jonathan Gevaryahu AKA Lord Nightmare jgevaryahu at gmail.com jgevaryahu at hotmail.com From lm at mcvoy.com Tue Jun 25 14:18:06 2019 From: lm at mcvoy.com (Larry McVoy) Date: Mon, 24 Jun 2019 21:18:06 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190625005528.GA11929@wopr> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190625005528.GA11929@wopr> Message-ID: <20190625041806.GL7655@mcvoy.com> On Mon, Jun 24, 2019 at 05:55:28PM -0700, Kurt H Maier wrote: > On Mon, Jun 24, 2019 at 05:45:23PM -0700, Larry McVoy wrote: > > > > Like I said, I can point anyone at code I wrote as a grad student that > > while I'm not proud of the style, it has style and it is clean. Just > > because you are a grad student that doesn't excuse messy code. If you > > write messy code then you're a bad hire. > > > > This is akin to complaining about laborers not polishing railroad spikes > before hammering them into the sleepers. It's hard enough to find > people willing to touch computers at all for grad-student "wages," much > less ones both capable & willing to be held to production-code standards > on budgets that barely put food on the table, one fiscal year at a time. It is not about wages, when I was a grad student I got $16K and had to pay tuition and rent and everything else out of that. It's not about money. It's about caring about your craft. I cared, the people I have worked with in industry cared, if they didn't I left. The point I was trying to make was that you can be a student and still be a pro. Or not. The pros care about their craft. The Mach people, in my you-get-what-you-paid-for opinion, were not pros. They got a lot done in a sloppy way and they left a mess. I don't know how to say it more clearly, there are plenty examples of students that wrote clean code. Mach was cool, clean code it was not. From jsteve at superglobalmegacorp.com Tue Jun 25 17:49:57 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Tue, 25 Jun 2019 15:49:57 +0800 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: Well hot on the heels of the SUN-3 version of Mach25 I managed to figure out enough of the wedge issues for the 386 directory on the CSRG CD set and got it to compile! I put up a non HTTPS server on port 8080 for people with http only access to this stuff.. http://vpsland.superglobalmegacorp.com:8080/install/Mach/mach25-i386.tar.gz I apologize for the 404 & password craziness but the whole story is in the 404 page. It’s so annoying, but here we are in the world of anonymous virus scans and skittish data centres. I’m using the aforementioned MtXinu (http://vpsland.superglobalmegacorp.com:8080/install/Mach/MtXinu/) Mach386 to build this stuff. I haven’t looked at cross compiling from anything yet at the moment. Gzip -dc & tar -xvf this somewhere with space (/usr/src?) The Makefile bombs while running config on the source, I don’t immediately see where it fails, but it’s easy enough to just CD into the directory run config , cd out & re-run make… cd mach25-i386 bash# sh build.sh and it'll do the build dance.... cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/pic_isa.c; ; ; cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/rtc.c; ; ; cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/wt.c; ; ; grep -v '^#' ../../machine/symbols.raw | sed 's/^ //' | sort -u > symbols.tmp mv -f symbols.tmp symbols.sort cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../machine/swapgeneric.c (null command) (null command) (null command) vers.config: No such file or directory loading vmunix.sys rearranging symbols text data bss dec hex 442336 46776 115216 604328 938a8 ln vmunix.sys vmunix; ln vmunix vmunix.I386x. md -f -d `ls *.d` So yeah, turns out both trees are buildable! who knew?! It's certainly not easy to figure out or anything close to self explanatory. I had to copy some files from the 'other' SUN-3 complete Mach. -- cp /usr/src/mach25/sys/Makeconf . cp /usr/src/mach25/sys/Makefile . cp /usr/src/mach25/sys/conf/newvers.sh conf To get anywhere with this. So weird that they were missing. I'm working on the boot sector stuff, looks like the stuff I build is too big, and I’m trying to work with the pre-built stuff. mkfs /dev/rfloppy 2880 18 2 4096 512 32 1 dd if=boot.hd of=/dev/rfd0c fsck /dev/rfd0a mount /dev/floppy /mnt I'd like to think I'm getting close. close to something. ... lol I’m not sure if this is so off topic, or noise? Anyways I’ll keep updating unless told otherwise. From: Chris Hanson Sent: Sunday, June 23, 2019 12:46 PM To: tuhs at minnie.tuhs.org Subject: [TUHS] CMU Mach sources? Does anyone know whether CMU’s local Mach sources have been preserved? I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. — Chris Sent from my iPhone -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin.bowling at kev009.com Tue Jun 25 18:00:54 2019 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Tue, 25 Jun 2019 01:00:54 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> Message-ID: Why not? The utility of history isn't just recording or paying reverence for the past, we can also draw conclusions in the current or insights into the future -- "the old new thing" That's certainly why I'm here and invest in computer history. Of course it can be lossy, and victors get more air time. But there's nothing inherently wrong with strong opinions or criticism of the past. Regards, Kevin On Mon, Jun 24, 2019 at 6:01 PM Richard Salz wrote: > > Is this really the kind of commentary appropriate for this list? I mean I'm new here, but... From andreas.grapentin at hpi.uni-potsdam.de Tue Jun 25 17:59:40 2019 From: andreas.grapentin at hpi.uni-potsdam.de (Andreas Grapentin) Date: Tue, 25 Jun 2019 09:59:40 +0200 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: <20190625075940.GA2646@parabola-pocket.localdomain> Amazing, thanks for sharing! Best, Andreas On Tue, Jun 25, 2019 at 03:49:57PM +0800, Jason Stevens wrote: > Well hot on the heels of the SUN-3 version of Mach25 I managed to figure out enough of the wedge issues for the 386 directory on the CSRG CD set and got it to compile! > > I put up a non HTTPS server on port 8080 for people with http only access to this stuff.. > > http://vpsland.superglobalmegacorp.com:8080/install/Mach/mach25-i386.tar.gz > > I apologize for the 404 & password craziness but the whole story is in the 404 page. It’s so annoying, but here we are in the world of anonymous virus scans and skittish data centres. > > I’m using the aforementioned MtXinu (http://vpsland.superglobalmegacorp.com:8080/install/Mach/MtXinu/) Mach386 to build this stuff. I haven’t looked at cross compiling from anything yet at the moment. > > Gzip -dc & tar -xvf this somewhere with space (/usr/src?) > > The Makefile bombs while running config on the source, I don’t immediately see where it fails, but it’s easy enough to just CD into the directory run config , cd out & re-run make… > > cd mach25-i386 > bash# sh build.sh > > and it'll do the build dance.... > > cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/pic_isa.c; ; ; > cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/rtc.c; ; ; > cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../i386at/wt.c; ; ; > grep -v '^#' ../../machine/symbols.raw | sed 's/^ //' | sort -u > symbols.tmp > mv -f symbols.tmp symbols.sort > cc -c -O -MD -DCMU -DINET -DMACH -DAT386 -DCMUCS -DKERNEL -fno-function-cse ../../machine/swapgeneric.c > (null command) > (null command) > (null command) > vers.config: No such file or directory > loading vmunix.sys > rearranging symbols > text data bss dec hex > 442336 46776 115216 604328 938a8 > ln vmunix.sys vmunix; ln vmunix vmunix.I386x. > md -f -d `ls *.d` > > > > So yeah, turns out both trees are buildable! who knew?! It's certainly not easy to figure out or anything close to self explanatory. > > I had to copy some files from the 'other' SUN-3 complete Mach. > > -- > cp /usr/src/mach25/sys/Makeconf . > cp /usr/src/mach25/sys/Makefile . > cp /usr/src/mach25/sys/conf/newvers.sh conf > > > To get anywhere with this. So weird that they were missing. > > I'm working on the boot sector stuff, looks like the stuff I build is too big, and I’m trying to work with the pre-built stuff. > > > mkfs /dev/rfloppy 2880 18 2 4096 512 32 1 > dd if=boot.hd of=/dev/rfd0c > fsck /dev/rfd0a > mount /dev/floppy /mnt > > I'd like to think I'm getting close. close to something. ... lol > > I’m not sure if this is so off topic, or noise? Anyways I’ll keep updating unless told otherwise. > > From: Chris Hanson > Sent: Sunday, June 23, 2019 12:46 PM > To: tuhs at minnie.tuhs.org > Subject: [TUHS] CMU Mach sources? > > Does anyone know whether CMU’s local Mach sources have been preserved? > > I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. > > I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. > > If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. > > — Chris > > Sent from my iPhone > -- ------------------------------------------------------------------------------ Andreas Grapentin, M.Sc. Research Assistant @ Hasso-Plattner-Institut Operating Systems and Middleware Group www.dcl.hpi.uni-potsdam.de Phone: +49 (0) 331 55 09-238 Fax: +49 (0) 331 55 09-229 my GPG Public Key: https://files.grapentin.org/.gpg/public.key ------------------------------------------------------------------------------ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From kevin.bowling at kev009.com Tue Jun 25 18:15:20 2019 From: kevin.bowling at kev009.com (Kevin Bowling) Date: Tue, 25 Jun 2019 01:15:20 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: Thanks for the link. This is overall a great find, and you almost totally made my night, but the doc/unpublished directory seems to be pruned versus what other docs in here state :(. In particular I'm looking for the stuff mentioned in ftp://ftp2.fr.openbsd.org/pub/mach/cmu/FAQ/rs6k_announce On Mon, Jun 24, 2019 at 8:08 PM Gregg Levine wrote: > > Hello! > Actually Chris, I found a complete collection of both CMU Mach and the > Flux Group Mach, and even MkMach at the FTP2 site for the French > OpenBSD location, ftp://ftp2.fr.openbsd.org under the pub and the mach > directories. > > In all actuality I first discovered the Mach code base and the binary > at the Flux Group offices of the Utah Computer Sciences site. They > shut that down around the turn of the century. And once at the Arizona > site for their computer sciences site. I believe it is gone as is the > CMU one. > > And Jason I found your Gunkies Wiki with a link to your incredible > storage site. > ----- > Gregg C Levine gregg.drwho8 at gmail.com > "This signature fought the Time Wars, time and again." > > On Sun, Jun 23, 2019 at 12:45 AM Chris Hanson > wrote: > > > > Does anyone know whether CMU’s local Mach sources have been preserved? > > > > I’m not just talking about MK84.default.tar.Z and so on, I’m talking about all the bits of Mach that were used on cluster systems on campus, prior to the switch to vendor UNIX. > > > > I know at least one person who had complete MacMach sources for the last version, but threw out the backup discs with the sources in the process of moving. So I know they exist. > > > > If nothing else, CMU did provide other sites their UX source package (eg UX42), which was the BSD single server environment. So I know that has to be out there, somewhere. > > > > — Chris > > > > Sent from my iPhone From krewat at kilonet.net Tue Jun 25 22:11:47 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Tue, 25 Jun 2019 08:11:47 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> Message-ID: <71d9dfd8-cca9-e842-d2f8-a706cf118d73@kilonet.net> There's nothing like sitting in a room listening to your elders argue over minutiae. Or huge issues such as code cleanliness ;) I didn't even graduate high school, but one of the first things my mentor/boss did before I started working for him as a consultant was to comment, and write clean code. And that was on TOPS-10, using MACRO-10. I've recently been exposed to a grad student's C++ code, and between no error checking and outright lack of formatting or any other care in the world for "clean" code, his stuff is atrocious. His casts from one type to another to another to another through nested function calls makes my skin crawl. ak On 6/25/2019 4:00 AM, Kevin Bowling wrote: > Why not? The utility of history isn't just recording or paying > reverence for the past, we can also draw conclusions in the current or > insights into the future -- "the old new thing" > > That's certainly why I'm here and invest in computer history. Of > course it can be lossy, and victors get more air time. But there's > nothing inherently wrong with strong opinions or criticism of the > past. > > Regards, > Kevin > > On Mon, Jun 24, 2019 at 6:01 PM Richard Salz wrote: >> Is this really the kind of commentary appropriate for this list? I mean I'm new here, but... From krewat at kilonet.net Tue Jun 25 22:17:26 2019 From: krewat at kilonet.net (Arthur Krewat) Date: Tue, 25 Jun 2019 08:17:26 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <71d9dfd8-cca9-e842-d2f8-a706cf118d73@kilonet.net> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <71d9dfd8-cca9-e842-d2f8-a706cf118d73@kilonet.net> Message-ID: <99863c31-c781-2d75-3502-5e1390011bb3@kilonet.net> That should have read *teach me to comment, and write clean code. On 6/25/2019 8:11 AM, Arthur Krewat wrote: > was to comment, and write clean code. From ckeck at texoma.net Tue Jun 25 21:21:45 2019 From: ckeck at texoma.net (ckeck at texoma.net) Date: Tue, 25 Jun 2019 06:21:45 -0500 Subject: [TUHS] Floppy to modern files for Usenet maps In-Reply-To: References: Message-ID: <4843FA5B-4468-41FC-B89A-93D552D62E1B@texoma.net> Kermit used to exist for a great many systems, including DOS. A Pi might get that installed via apt-get, or compiled from scratch (might have to do that soon for some other project). As far as connectivity goes, places like Frys sell USB-RS232 Adapters and null-modem cables, means one can avoid messing with the Pi’s IO bits. Alternatively you could try uucp, but that requires more configuration. Von meinem iPhone gesendet > Am 24.06.2019 um 22:54 schrieb Jonathan Gevaryahu : > > I'd use something like imagedisk or teledisk or anadisk for reading the > diskette; this will also preserve the deleted/unused sectors, the boot > sector and the disk filesystem/metadata, while just copying the files > off will lose most of this data. > >> On 6/23/2019 7:10 PM, Mary Ann Horton Gmail wrote: >> Hunting around through my ancient stuff today, I ran across a 5.25" >> floppy drive labeled as having old Usenet maps. These may have >> historical interest. >> >> First off, I don't recognize the handwriting on the disk. It's not >> mine. Does anyone recognize it? (pic attached) >> >> I dug out my AT&T 6300 (XT clone) from the garage and booted it up. >> The floppy reads just fine. It has files with .MAP extension, which >> are ASCII Usenet maps from 1980 to 1984, and some .BBM files which are >> ASCII Usenet backbone maps up to 1987. >> >> There is also a file whose extension is .GRF from 1983 which claims to >> be a graphical Usenet map. Does anyone have any idea what GRF is or >> what this map might be? I recall Brian Reid having a plotter-based >> Usenet geographic map in 84 or 85. >> >> I'd like to copy these files off for posterity. They read on DOS just >> fine. Is there a current best practice for copying off files? I would >> have guessed I'd need a to use the serial port, but my old PC has DOS >> 2.11 (not much serial copying software on it) and I don't have >> anything live with a serial port anymore. And it might not help with >> the GRF file. >> >> I took some photos of the screen with the earliest maps (the ones that >> fit on one screen.) So it's an option to type things in, at least for >> the early ASCII ones. >> >> Thanks, >> >> Mary Ann >> >> > > -- > Jonathan Gevaryahu AKA Lord Nightmare > jgevaryahu at gmail.com > jgevaryahu at hotmail.com > From cmhanson at eschatologist.net Wed Jun 26 04:18:01 2019 From: cmhanson at eschatologist.net (Chris Hanson) Date: Tue, 25 Jun 2019 11:18:01 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: On Jun 24, 2019, at 8:07 PM, Gregg Levine wrote: > > Actually Chris, I found a complete collection of both CMU Mach and the > Flux Group Mach, and even MkMach at the FTP2 site for the French > OpenBSD location, ftp://ftp2.fr.openbsd.org under the pub and the mach > directories. Thanks for this, but it’s just the stuff that was made publicly available by these groups. It’s useful, especially to have via FTP (easier to sync), but it doesn’t cover things like UX42 (the BSD atop Mach that CMU deployed to cluster workstations). -- Chris From norman at oclsc.org Wed Jun 26 05:33:22 2019 From: norman at oclsc.org (Norman Wilson) Date: Tue, 25 Jun 2019 15:33:22 -0400 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") Message-ID: <1561491205.19116.for-standards-violators@oclsc.org> It's interesting that this comment about ptrace was written as early as 1980. Ron Minnich's reference to Plan 9 /proc misses the mark, though. By the time Plan 9 was written, System V already had /proc; see https://www.usenix.org/sites/default/files/usenix_winter91_faulkner.pdf And as the authors say, the idea actually dates back to Tom Killian's /proc in Research UNIX. I don't know when Tom's code first went live, but I first heard about it by seeing it in action on my first visit to Bell Labs in early 1984, and it was described in public in a talk at the Summer 1984 USENIX conference in Salt Lake City. I cannot quickly find an online copy of the corresponding paper; pointers appreciated. (Is there at least an online index of BTL CSTRs? The big search engine run by the place that still has some 1127 old-timers can't find that either.) As for ptrace itself, I heartily agree that /proc made it obsolete. So did everyone else in 1127 when I was there, but nobody wanted to update adb and sdb, which were big messes inside. So I did, attempting a substantial internal makeover of adb to ease making versions for different systems and even cross-versions, but just a quick hack for sdb. Once I'd done that and shipped the new adb and sdb binaries to all our machines, I removed the ptrace call from the kernel. It happened that in the Eighth (or was it Ninth by then? I'd have to dig out notes to find out) Edition manual, ptrace(2) was on two facing pages. To celebrate, I glued said pages together in the UNIX Room's copy of the manual. Would it were so easy to take out the trash today. Norman Wilson Toronto ON From bakul at bitblocks.com Wed Jun 26 05:42:07 2019 From: bakul at bitblocks.com (Bakul Shah) Date: Tue, 25 Jun 2019 12:42:07 -0700 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: <1561491205.19116.for-standards-violators@oclsc.org> References: <1561491205.19116.for-standards-violators@oclsc.org> Message-ID: On Jun 25, 2019, at 12:33 PM, Norman Wilson wrote: > > It's interesting that this comment about ptrace was written > as early as 1980. > > Ron Minnich's reference to Plan 9 /proc misses the mark, though. > By the time Plan 9 was written, System V already had /proc; see > > https://www.usenix.org/sites/default/files/usenix_winter91_faulkner.pdf > > And as the authors say, the idea actually dates back to Tom Killian's > /proc in Research UNIX. I don't know when Tom's code first went > live, but I first heard about it by seeing it in action on my first > visit to Bell Labs in early 1984, and it was described in public in > a talk at the Summer 1984 USENIX conference in Salt Lake City. > I cannot quickly find an online copy of the corresponding paper; > pointers appreciated. (Is there at least an online index of BTL > CSTRs? The big search engine run by the place that still has > some 1127 old-timers can't find that either.) http://lucasvr.gobolinux.org/etc/Killian84-Procfs-USENIX.pdf From gregg.drwho8 at gmail.com Wed Jun 26 06:23:11 2019 From: gregg.drwho8 at gmail.com (Gregg Levine) Date: Tue, 25 Jun 2019 16:23:11 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: Hello! And oddly enough the Finnish site ftp://nic.funet.fi maintains a collection of Mach related items at ftp://nic.funet.fi/pub/doc/OS/Mach/ and and at ftp://nic.funet.fi/pub/mach/ and a place named LYX contains Mach at ftp://ftp.lyx.org/pub/mach/ Ideally it is just a duplicate of the first one from earlier. And then Google gets lost. It also includes several hits to Jason's work, but after that Google gets lost. ----- Gregg C Levine gregg.drwho8 at gmail.com "This signature fought the Time Wars, time and again." On Tue, Jun 25, 2019 at 2:18 PM Chris Hanson wrote: > > On Jun 24, 2019, at 8:07 PM, Gregg Levine wrote: > > > > Actually Chris, I found a complete collection of both CMU Mach and the > > Flux Group Mach, and even MkMach at the FTP2 site for the French > > OpenBSD location, ftp://ftp2.fr.openbsd.org under the pub and the mach > > directories. > > Thanks for this, but it’s just the stuff that was made publicly available by these groups. It’s useful, especially to have via FTP (easier to sync), but it doesn’t cover things like UX42 (the BSD atop Mach that CMU deployed to cluster workstations). > > -- Chris > > From clemc at ccc.com Wed Jun 26 06:35:25 2019 From: clemc at ccc.com (Clem Cole) Date: Tue, 25 Jun 2019 20:35:25 +0000 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: <1561491205.19116.for-standards-violators@oclsc.org> References: <1561491205.19116.for-standards-violators@oclsc.org> Message-ID: On Tue, Jun 25, 2019 at 7:34 PM Norman Wilson wrote: > It's interesting that this comment about ptrace was written > as early as 1980. > Indeed - that seems to be really strange. I wonder of the Man page was written later. > > I don't know when Tom's code first went > live, but I first heard about it by seeing it in action on my first > visit to Bell Labs in early 1984, and it was described in public in > a talk at the Summer 1984 USENIX conference in Salt Lake City. > Ditto. The 84 paper was the first I knew about it but .... It's possible Tom was messing with it before then. Joy spent a couple of Summers in NJ but I've forgotten when. But if it was being talked about/prototyped in the summer of '79, he might have known by 1980. But I find that unlikely. I really don't remember people going /proc crazy until after the '84 paper. The other minor thing missing was the VFS/File System Switch layer. Peter had not put FSS into Research 8. What I don't remember is which came first Peter's work for Tom. The RFS guys would use Peter's work for V.3. I used it something similar for EFS after reading about it and the NFS/EFS are '85 USENIX. Somebody at Sun did VFS, which was better than FSS, although later we came to conclusion both had advantages and disadvantages and a true i-node interposition layer was best so you could really want to do FS stacking. But by that time, the damage was done, and people had gone FS crazy. Since Sun gave away NFS and Peter's work was tied up in either Research 8 or V.3 (i.e. AT&T licensing), VFS won. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Wed Jun 26 09:52:30 2019 From: rminnich at gmail.com (ron minnich) Date: Tue, 25 Jun 2019 16:52:30 -0700 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: <1561491205.19116.for-standards-violators@oclsc.org> References: <1561491205.19116.for-standards-violators@oclsc.org> Message-ID: On Tue, Jun 25, 2019 at 12:34 PM Norman Wilson wrote: > > It's interesting that this comment about ptrace was written > as early as 1980. > > Ron Minnich's reference to Plan 9 /proc misses the mark, though. your comment about my comment misses the mark; I was not talking about the origins of /proc. This is probably because I was not clear and probably because few people realize that the plan 9 process debugging interface was strings written and read to/from /proc//[various files], rather than something like ptrace. The first time I saw that debug-interface-in-proc in plan 9, it made me think back to the 4.1c bsd manual ptrace comment, and I wondered if there was any path that led from this man page entry to the ideas in the plan 9 methods. I actually implemented the plan 9 debug model in linux back around 2007, but was pretty sure getting it upstream would never happen, so let it die. ron From dave at horsfall.org Wed Jun 26 10:26:31 2019 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 26 Jun 2019 10:26:31 +1000 (EST) Subject: [TUHS] Testing... Or, Why I Did It My Way (tm) In-Reply-To: References: Message-ID: On Tue, 25 Jun 2019, Dave Horsfall wrote: > Sorry for the noise, but this list is awfully quiet. OK, here's the story... Oh, Arthur, your SPF rejected me due to what is likely a YATS (Yet Another Telstra Screwup); I've since removed the record (it was just a place-holder anyway) but I forgot to update the TTL first. First, for the non-Aussies here, a simple glossary: NBN: National Broadband Network. Also known as No Brain Network, it's our national network infrastructure, powered (if that's the right word) by an ugly mix of technologies, viz: FTTP (which I have), FTTN (which most people have - sort of ADSL on steroids), FTTC (a new innovation), HFC (for those with cable), and for all I know tribal drums. Supposed to be FTTP to everyone, until the incoming conservative government decided that FTTP everywhere was far too good for the sheeple. Telstra: Technically Telstra BigPond, it's the biggest ISP in the country, one of the most expensive, and likely the worst, service-wise. Fondly known as T$, Helstra etc, I think the only reason I stick with them is Stockholm Syndrome (that, and a 2 year contract). Now, you cannot contact NBN directly; you have to go through your ISP (in my case T$) i.e. speak to the monkey, not the organ-grinder. OK... Full story for the morbidly interested available on request as a redacted version of my complaint to Telstra; hell, if I'm in an ugly enough mood I'll put it on my web page (it's an RTF file). My NBN service was cut off on 25th March, when NBN themselves not only sliced through my fibre cable at the adjacent pit but for good measure also disconnected it from the concentrator in the next pit; they are yet to explain just why, but they couldn't've done a better job of sabotaging my service if they'd tried. After much to-ing and fro-ing with T$'s helldesk (broken promises to turn up, return calls, etc), I finally had service restored on 20th May; that's 57 days (yes, I received a credit for lack of service, but not for the mobile (ObUS: cellular) calls that I had to make on a competing service). A nice new wireless router was supplied with automatic (I think) 4G backup (naturally with little documentation), but I realised that the port forwarding rules were the same as my old router (a Technicolor F at st 5355). Of course I got auto-unsubscribed from various mailing lists; the trick was to figure out which ones... FreeBSD-ports didn't notice my absence, and neither did Krebs on Security; I've since tracked down a few more and re-subscribed. Now, this is where TUHS comes in. Hmmm... Awfully quiet, yet it's supposed to be an active list. Am I still subscribed? Quickest way to find out is to post to it, and a well-run mailing list will return a snotty-gram along with an URL. Did I get one? Nope. Was it accepted? Apparently so... So, why isn't any traffic coming my way? And for the smart-arses (ObUS: smart-asses) out there who think I must be blocking it, nope, I watch my mail reject log like a hawk (I have all sorts of reporting scripts) and Minnie was definitely not there. Are you still here, Warren? I did email you, but no reply so far (and no reject either, otherwise I would've spotted *that* too). If anyone has a better way to determine list membership other than by posting to it then I'm all ears/eyes; there are just too many ways to subscribe these days (not everyone uses Mailman, and some are by invitation only). Happy, now? Hours spend on this sh1t so far, and hours yet to go... -- Dave From robpike at gmail.com Wed Jun 26 10:37:39 2019 From: robpike at gmail.com (Rob Pike) Date: Wed, 26 Jun 2019 10:37:39 +1000 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: References: <1561491205.19116.for-standards-violators@oclsc.org> Message-ID: Peter Weinberger started and Tom Killian finalized a version of /proc for the eighth edition that is ioctl-driven. It was done in the early 1980s. I don't know where the idea originated. In Plan 9, we (I?) replaced the ioctl interface, which was offensively non-portable. -rob On Wed, Jun 26, 2019 at 10:01 AM ron minnich wrote: > On Tue, Jun 25, 2019 at 12:34 PM Norman Wilson wrote: > > > > It's interesting that this comment about ptrace was written > > as early as 1980. > > > > Ron Minnich's reference to Plan 9 /proc misses the mark, though. > > your comment about my comment misses the mark; I was not talking about > the origins of /proc. This is probably because I was not clear and > probably because few people realize that the plan 9 process debugging > interface was strings written and read to/from /proc//[various > files], rather than something like ptrace. > > The first time I saw that debug-interface-in-proc in plan 9, it made > me think back to the 4.1c bsd manual ptrace comment, and I wondered if > there was any path that led from this man page entry to the ideas in > the plan 9 methods. > > I actually implemented the plan 9 debug model in linux back around > 2007, but was pretty sure getting it upstream would never happen, so > let it die. > > ron > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at cs.dartmouth.edu Wed Jun 26 10:40:55 2019 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Tue, 25 Jun 2019 20:40:55 -0400 Subject: [TUHS] 4.1c bsd ptrace man entry Message-ID: <201906260040.x5Q0etJF110839@tahoe.cs.Dartmouth.EDU> Ptrace was short-lived at Research, appearing in 6th through 8th editions. /proc was introduced in the 8th. Norman axed it in the 9th. Norman wrote: nobody wanted to update adb and sdb, which were big messes inside. So I did ... Once I'd done that and shipped the new adb and sdb binaries to all our machines, I removed the ptrace call from the kernel. doug From lm at mcvoy.com Wed Jun 26 10:46:03 2019 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 25 Jun 2019 17:46:03 -0700 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: References: <1561491205.19116.for-standards-violators@oclsc.org> Message-ID: <20190626004603.GG925@mcvoy.com> I'm curious what Rob and others think of the Linux /proc. It's string based and it seems like it is more like /whatever_you_might_want. The AT&T /proc that Faulkner worked on was much more narrow in scope, in keeping with the Unix tradition. The linux /proc was both a way to dig into kernel stuff and control kernel stuff, it was way broader. On Wed, Jun 26, 2019 at 10:37:39AM +1000, Rob Pike wrote: > Peter Weinberger started and Tom Killian finalized a version of /proc for > the eighth edition that is ioctl-driven. It was done in the early 1980s. I > don't know where the idea originated. > > In Plan 9, we (I?) replaced the ioctl interface, which was offensively > non-portable. > > -rob > > > On Wed, Jun 26, 2019 at 10:01 AM ron minnich wrote: > > > On Tue, Jun 25, 2019 at 12:34 PM Norman Wilson wrote: > > > > > > It's interesting that this comment about ptrace was written > > > as early as 1980. > > > > > > Ron Minnich's reference to Plan 9 /proc misses the mark, though. > > > > your comment about my comment misses the mark; I was not talking about > > the origins of /proc. This is probably because I was not clear and > > probably because few people realize that the plan 9 process debugging > > interface was strings written and read to/from /proc//[various > > files], rather than something like ptrace. > > > > The first time I saw that debug-interface-in-proc in plan 9, it made > > me think back to the 4.1c bsd manual ptrace comment, and I wondered if > > there was any path that led from this man page entry to the ideas in > > the plan 9 methods. > > > > I actually implemented the plan 9 debug model in linux back around > > 2007, but was pretty sure getting it upstream would never happen, so > > let it die. > > > > ron > > -- --- Larry McVoy lm at mcvoy.com http://www.mcvoy.com/lm From jsteve at superglobalmegacorp.com Wed Jun 26 10:53:32 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Wed, 26 Jun 2019 00:53:32 +0000 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: The only UX42 like thing I've found is a binary image on the Mach 3.0 disk set of the mt xinu disks.  In one of the Amiga directories of Mach stuff and the NS512 there is the BSDSS version 4 dump.  I haven't tried to build it yet.  Apparently CMU had up to version 8 of it, until the AT&T lawsuit where CMU apparently got cold feet and locked everything up. If google works right it seems that you'll just find me interested in the old stuff, the world has mostly passed this stuff on. I have been looking for RS/6000 Mach the better part of forever.  Absolutely zero luck. It's funny about MachTEN, I asked them about buying the source, and/or redistribution but all they have is apparently a mountain of version 4 CD-ROMs. The only exciting thing is getting the mt xinu binaries and being able to compile the CSRG dump of 2.5.  I found on bochs that it's doing something weird at the 3gb boundary which resulted in a triple fault and reboot. Everything is about doing elf debug, a.out is so out of vogue it's not even funny.  I'd always assumed that Mach 2.5 on i386 actually works.  Although the 3.0 stuff, at least on the Mt Xinu disks does.  Time to walk through start and locore...  Definitely way above my pay grade. Although it does look like there is some sequent machine with multiple 386 processors implying that it's SMP capable.  Which probably doesn't work, otherwise why would NeXT have been lacking in SMP for so long?  It'd have been awesome on the SUN hardware, and of course on i386.  Instead it didn't come until what? OS X 10.3? Im always saddened on how the most prolific platform was ignored back in the day it seems.  Sure the 80386 isnt sexy but they didn't have to cost as much as a luxury car. Have you tried emailing professors at mit, Utah or CMU?  Maybe they might take you up on it.  I had zero luck, but I don't have any 'in'.  I'm just some dropout that barely made it through high school, not exactly university material. Get Outlook for Android On Wed, Jun 26, 2019 at 2:26 AM +0800, "Chris Hanson" wrote: On Jun 24, 2019, at 8:07 PM, Gregg Levine wrote: > > Actually Chris, I found a complete collection of both CMU Mach and the > Flux Group Mach, and even MkMach at the FTP2 site for the French > OpenBSD location, ftp://ftp2.fr.openbsd.org under the pub and the mach > directories. Thanks for this, but it’s just the stuff that was made publicly available by these groups. It’s useful, especially to have via FTP (easier to sync), but it doesn’t cover things like UX42 (the BSD atop Mach that CMU deployed to cluster workstations). -- Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From robpike at gmail.com Wed Jun 26 10:56:19 2019 From: robpike at gmail.com (Rob Pike) Date: Wed, 26 Jun 2019 10:56:19 +1000 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: <20190626004603.GG925@mcvoy.com> References: <1561491205.19116.for-standards-violators@oclsc.org> <20190626004603.GG925@mcvoy.com> Message-ID: I have no informed opinion on Linux's /proc. -rob On Wed, Jun 26, 2019 at 10:46 AM Larry McVoy wrote: > I'm curious what Rob and others think of the Linux /proc. It's string > based and it seems like it is more like /whatever_you_might_want. > > The AT&T /proc that Faulkner worked on was much more narrow in scope, > in keeping with the Unix tradition. The linux /proc was both a way > to dig into kernel stuff and control kernel stuff, it was way broader. > > On Wed, Jun 26, 2019 at 10:37:39AM +1000, Rob Pike wrote: > > Peter Weinberger started and Tom Killian finalized a version of /proc for > > the eighth edition that is ioctl-driven. It was done in the early 1980s. > I > > don't know where the idea originated. > > > > In Plan 9, we (I?) replaced the ioctl interface, which was offensively > > non-portable. > > > > -rob > > > > > > On Wed, Jun 26, 2019 at 10:01 AM ron minnich wrote: > > > > > On Tue, Jun 25, 2019 at 12:34 PM Norman Wilson > wrote: > > > > > > > > It's interesting that this comment about ptrace was written > > > > as early as 1980. > > > > > > > > Ron Minnich's reference to Plan 9 /proc misses the mark, though. > > > > > > your comment about my comment misses the mark; I was not talking about > > > the origins of /proc. This is probably because I was not clear and > > > probably because few people realize that the plan 9 process debugging > > > interface was strings written and read to/from /proc//[various > > > files], rather than something like ptrace. > > > > > > The first time I saw that debug-interface-in-proc in plan 9, it made > > > me think back to the 4.1c bsd manual ptrace comment, and I wondered if > > > there was any path that led from this man page entry to the ideas in > > > the plan 9 methods. > > > > > > I actually implemented the plan 9 debug model in linux back around > > > 2007, but was pretty sure getting it upstream would never happen, so > > > let it die. > > > > > > ron > > > > > -- > --- > Larry McVoy lm at mcvoy.com > http://www.mcvoy.com/lm > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rminnich at gmail.com Wed Jun 26 11:03:47 2019 From: rminnich at gmail.com (ron minnich) Date: Tue, 25 Jun 2019 18:03:47 -0700 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: <20190626004603.GG925@mcvoy.com> References: <1561491205.19116.for-standards-violators@oclsc.org> <20190626004603.GG925@mcvoy.com> Message-ID: On Tue, Jun 25, 2019 at 5:46 PM Larry McVoy wrote: > > I'm curious what Rob and others think of the Linux /proc. It's string > based and it seems like it is more like /whatever_you_might_want. it's very handy but quite difficult to work with programatically. The output is convenient for humans to parse, not very nice for programs to parse. /proc on linux has no real standard way of outputting things. You get tables, tuples, and lists and some stuff I can't classify (/proc/execdomains, /proc/devices); and, in some cases, some files give you more than one type of thing. Units are not clear for many tables. /proc on linux has far more than just process information, including stuff that has nothing to do with processes (51 things on my current linux, e.g. /proc/mounts). Things are in many cases not self-describing, though lots of /proc have this issue. I do recall (possibly wrongly) at some point in the 2000s there was an effort to stop putting stuff in /proc, but rather in /sys, but that seems to have not worked out. /proc is just too convenient a place, and by convention, lots of stuff lands there. While I was at LANL we did experiment with having /proc come out as s-expressions, which were nicely self describing, composable, easily parsed and operated on, and almost universally disliked b/c humans don't read s-expressions that easily. So that ended. We've been reimplementing Unix commands in Go for about 8 years now and dealing with all the variance in /proc on linux was a headache. You pretty much need a different function for every file in /proc. And all that said, it's handy, so hard to complain about too much. From jsteve at superglobalmegacorp.com Wed Jun 26 11:04:19 2019 From: jsteve at superglobalmegacorp.com (Jason Stevens) Date: Wed, 26 Jun 2019 01:04:19 +0000 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> Message-ID: After I got lites + 3.0 on netbsd running on qemu with networking to show how painfully slow it was, it felt like I was the last person on earth to care about this stuff. I think the gnumach folks had some interesting trip on rehosting Mach on oskit to get all those Linux 2.0 drivers but they never could get the push to x86_64.  I guess it's hard to compete in the Hurd cathedral vs the Linux bazaar... So to speak. Probably the more important stuff to archive and find is mailing lists although I don't know of anything surviving anywhere.  I guess it was deep underground as the internet lawyers were in fear about emailing patches and having AT&T show up at their door demanding new borns. Oh well such is life when you are chasing evelotionary dead ends.  It's not like a 4.3BSD os is going to set the world on fire, but then again I did save Quasijarious from the digital dumpster as well. Get Outlook for Android On Wed, Jun 26, 2019 at 4:24 AM +0800, "Gregg Levine" wrote: Hello! And oddly enough the Finnish site ftp://nic.funet.fi maintains a collection of Mach related items at ftp://nic.funet.fi/pub/doc/OS/Mach/ and and at ftp://nic.funet.fi/pub/mach/ and a place named LYX contains Mach at ftp://ftp.lyx.org/pub/mach/ Ideally it is just a duplicate of the first one from earlier. And then Google gets lost. It also includes several hits to Jason's work, but after that Google gets lost. ----- Gregg C Levine gregg.drwho8 at gmail.com "This signature fought the Time Wars, time and again." On Tue, Jun 25, 2019 at 2:18 PM Chris Hanson wrote: > > On Jun 24, 2019, at 8:07 PM, Gregg Levine wrote: > > > > Actually Chris, I found a complete collection of both CMU Mach and the > > Flux Group Mach, and even MkMach at the FTP2 site for the French > > OpenBSD location, ftp://ftp2.fr.openbsd.org under the pub and the mach > > directories. > > Thanks for this, but it’s just the stuff that was made publicly available by these groups. It’s useful, especially to have via FTP (easier to sync), but it doesn’t cover things like UX42 (the BSD atop Mach that CMU deployed to cluster workstations). > > -- Chris > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggm at algebras.org Wed Jun 26 11:12:49 2019 From: ggm at algebras.org (George Michaelson) Date: Wed, 26 Jun 2019 11:12:49 +1000 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: References: <1561491205.19116.for-standards-violators@oclsc.org> <20190626004603.GG925@mcvoy.com> Message-ID: The lack of consistency in what you can READ in /proc makes it hard to believe its useful in the "wide" -but I am sure specific things get benefit from it, as an abstraction which makes code simple because "its a file" if you're WRITING into things in /proc, I think you own the pain be it an ioctl() or anything else. I see occasional shell scripts about turning on and off meta-state for SCSI or SAS as "cat 0 > /dev/somedir/some-model-of-abstraction/some-disk" and while I applaud, I also wince. So easy to go wrong.. As a long-term user and non-developer, I'm sort of half a believer, half not. Maybe if it had emerged before the great Schism(s) it would be more normal? sane? understandable? -G On Wed, Jun 26, 2019 at 11:04 AM ron minnich wrote: > > On Tue, Jun 25, 2019 at 5:46 PM Larry McVoy wrote: > > > > I'm curious what Rob and others think of the Linux /proc. It's string > > based and it seems like it is more like /whatever_you_might_want. > > it's very handy but quite difficult to work with programatically. The > output is convenient for humans to parse, not very nice for programs > to parse. > > /proc on linux has no real standard way of outputting things. You get > tables, tuples, and lists and some stuff I can't classify > (/proc/execdomains, /proc/devices); and, in some cases, some files > give you more than one type of thing. Units are not clear for many > tables. > > /proc on linux has far more than just process information, including > stuff that has nothing to do with processes (51 things on my current > linux, e.g. /proc/mounts). > > Things are in many cases not self-describing, though lots of /proc > have this issue. > > I do recall (possibly wrongly) at some point in the 2000s there was an > effort to stop putting stuff in /proc, but rather in /sys, but that > seems to have not worked out. /proc is just too convenient a place, > and by convention, lots of stuff lands there. > > While I was at LANL we did experiment with having /proc come out as > s-expressions, which were nicely self describing, composable, easily > parsed and operated on, and almost universally disliked b/c humans > don't read s-expressions that easily. So that ended. > > We've been reimplementing Unix commands in Go for about 8 years now > and dealing with all the variance in /proc on linux was a headache. > You pretty much need a different function for every file in /proc. > > And all that said, it's handy, so hard to complain about too much. From noel.hunt at gmail.com Wed Jun 26 11:32:34 2019 From: noel.hunt at gmail.com (Noel Hunt) Date: Wed, 26 Jun 2019 11:32:34 +1000 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: References: <1561491205.19116.for-standards-violators@oclsc.org> <20190626004603.GG925@mcvoy.com> Message-ID: I thought there was a filesystem in Ninth Edition called '/tbl' wherein various system related items could be read. I have never seen it in operation but I'm sure I saw it in the kernel code; it seemed to fulfill all the functions of the non-process related information that Linux dumps into /proc. Am I perhaps mistaken about this? On Wed, Jun 26, 2019 at 11:13 AM George Michaelson wrote: > The lack of consistency in what you can READ in /proc makes it hard to > believe its useful in the "wide" -but I am sure specific things get > benefit from it, as an abstraction which makes code simple because > "its a file" > > if you're WRITING into things in /proc, I think you own the pain be it > an ioctl() or anything else. > > I see occasional shell scripts about turning on and off meta-state for > SCSI or SAS as "cat 0 > > /dev/somedir/some-model-of-abstraction/some-disk" and while I applaud, > I also wince. So easy to go wrong.. > > As a long-term user and non-developer, I'm sort of half a believer, > half not. Maybe if it had emerged before the great Schism(s) it would > be more normal? sane? understandable? > > -G > > On Wed, Jun 26, 2019 at 11:04 AM ron minnich wrote: > > > > On Tue, Jun 25, 2019 at 5:46 PM Larry McVoy wrote: > > > > > > I'm curious what Rob and others think of the Linux /proc. It's string > > > based and it seems like it is more like /whatever_you_might_want. > > > > it's very handy but quite difficult to work with programatically. The > > output is convenient for humans to parse, not very nice for programs > > to parse. > > > > /proc on linux has no real standard way of outputting things. You get > > tables, tuples, and lists and some stuff I can't classify > > (/proc/execdomains, /proc/devices); and, in some cases, some files > > give you more than one type of thing. Units are not clear for many > > tables. > > > > /proc on linux has far more than just process information, including > > stuff that has nothing to do with processes (51 things on my current > > linux, e.g. /proc/mounts). > > > > Things are in many cases not self-describing, though lots of /proc > > have this issue. > > > > I do recall (possibly wrongly) at some point in the 2000s there was an > > effort to stop putting stuff in /proc, but rather in /sys, but that > > seems to have not worked out. /proc is just too convenient a place, > > and by convention, lots of stuff lands there. > > > > While I was at LANL we did experiment with having /proc come out as > > s-expressions, which were nicely self describing, composable, easily > > parsed and operated on, and almost universally disliked b/c humans > > don't read s-expressions that easily. So that ended. > > > > We've been reimplementing Unix commands in Go for about 8 years now > > and dealing with all the variance in /proc on linux was a headache. > > You pretty much need a different function for every file in /proc. > > > > And all that said, it's handy, so hard to complain about too much. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From khm at sciops.net Wed Jun 26 12:45:03 2019 From: khm at sciops.net (Kurt H Maier) Date: Tue, 25 Jun 2019 19:45:03 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> Message-ID: <20190626024503.GA43970@wopr> On Mon, Jun 24, 2019 at 09:00:28PM -0400, Richard Salz wrote: > Is this really the kind of commentary appropriate for this list? I mean I'm > new here, but... It might not be, but it is definitely relevant to Unix. Arguably the drivers of Unix's development movement away from R&D-focused places and toward product-oriented entities had at least a little to do with Larry's topic of complaint. Product managers gained the ammunition to demand sustainable development practices, while R&D got a little leaner, a little more focused on demonstrating the thesis, a little less focused on who might need to run this code five years on... I'd say there's a thesis to be written about the economics of support contracts vs the economics of proving concepts, but since I'm not volunteering to write it, I'll shut up about it on TUHS. Suffice to say the sociology surrounding the evolution of Unix is a topic I find fascinating, even if it's not strictly technical. khm From peter at rulingia.com Wed Jun 26 12:28:24 2019 From: peter at rulingia.com (Peter Jeremy) Date: Wed, 26 Jun 2019 12:28:24 +1000 Subject: [TUHS] Paper discussing Unix boot process? In-Reply-To: <7b575d14-270c-1d3a-7419-0329ffb42669@esse.ch> References: <14453.1554920068@cesium.clock.org> <57C2E8D6-148C-487E-A6AE-B6E0E6EC337C@bitblocks.com> <7b575d14-270c-1d3a-7419-0329ffb42669@esse.ch> Message-ID: <20190626022824.GA86961@server.rulingia.com> [Resurrecting an old thread to provide some input from Dave Horsfall] On 2019-Apr-11 06:52:08 +0200, Fabio Scotoni wrote: >On 4/11/19 1:19 AM, Bakul Shah wrote: >> On Apr 10, 2019, at 3:24 PM, Clem Cole wrote: >>> >>> [...] is the Lions book including PS and PDF and in the original troff thankfully. >> >> May be someone will be inspired enough to convert this to troff? ... >Thus, the first step would be to reverse engineer the troff macros used >to typeset the book. >Then the TeX sources would need to be converted to those troff macros; >this can possibly be automated entirely. >Then the matching version of troff would need to be used to typeset it >(likely via apout and V6 or V7 troff). >Finally, the C/A/T typesetter output would need to be converted to >PostScript or PDF (either Adobe's psroff or Chris Lewis's psroff from >comp.unix.sources can likely help with that; I got Lewis's psroff to >work a while ago, but it's pretty brittle). On 2019-Jun-26 11:34:31 +1000, Dave Horsfall wrote: >'Twas NROFF on the CSU's LA120 (I should know; I ran the Unix section), >with draft versions on a Duckwriter which I helped proof-read. Don't know >whether custom macros were used; quite likely, as he was that sort of >bloke. After all, he was a Comp Sci lecturer (one of mine!) and if you >find yourself writing the same lines over and over again... > >Going by that snippet of the thread (too much to follow, as I'm still >figuring out from which lists I've been bounced) it would be a heroic >effort to reverse-engineer it, and quite likely not worth the trouble. > >The original source would've been at Elec Eng, but long gone by now. > >As for TROFF, well, I'm not aware that UNSW has a C/A/T :-) > >Oh, the LA120 had a single-use nylon ribbon, I think, not fabric, hence >the somewhat high quality (I no longer have my Lions books to check; lost >after several house moves). -- Peter Jeremy -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From wkt at tuhs.org Wed Jun 26 12:50:23 2019 From: wkt at tuhs.org (Warren Toomey) Date: Wed, 26 Jun 2019 12:50:23 +1000 Subject: [TUHS] Testing Mail Connectivity In-Reply-To: References: Message-ID: <20190626025023.GA17457@minnie.tuhs.org> On Wed, Jun 26, 2019 at 10:26:31AM +1000, Dave Horsfall wrote: > Are you still here, Warren? I did email you, but no reply so far (and no > reject either, otherwise I would've spotted *that* too). All, sorry for the off-topic reply and query. I'm posting this here so that Dave can read it in the web archive. I'm guessing the problem is the one Dave identified earlier: > You need to grok Sendmail's log format (and SMTP in general), but it means > that Minnie connected to my server, waited the requisite time for the greeting > banner, and then shat herself when she saw my ginormous banner and dropped the > connection without so much as a good-bye... i.e.a multiline response to the HELO/EHLO which meets the RFC specs. > That simple measure, along with the greeting pause and some simple RFC DNS > checks, block a lot of the crap. I've just subscribed to the Postfix mailing list, and I'll ask on there if there is a configuration change I can make to allow my Postfix client to deal properly with Dave's multiline HELO/EHLO response. And why it can't at present. But if someone here already knows, could they let me know?! Thanks! Warren From lm at mcvoy.com Wed Jun 26 12:56:46 2019 From: lm at mcvoy.com (Larry McVoy) Date: Tue, 25 Jun 2019 19:56:46 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626024503.GA43970@wopr> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> Message-ID: <20190626025646.GR925@mcvoy.com> On Tue, Jun 25, 2019 at 07:45:03PM -0700, Kurt H Maier wrote: > On Mon, Jun 24, 2019 at 09:00:28PM -0400, Richard Salz wrote: > > Is this really the kind of commentary appropriate for this list? I mean I'm > > new here, but... > > It might not be, but it is definitely relevant to Unix. Arguably the > drivers of Unix's development movement away from R&D-focused places and > toward product-oriented entities had at least a little to do with > Larry's topic of complaint. Product managers gained the ammunition to > demand sustainable development practices, while R&D got a little leaner, > a little more focused on demonstrating the thesis, a little less focused > on who might need to run this code five years on... In the good old days at Sun, we were very focussed on who would run this code for decades to come. I think the engineers at Sun were very focussed on helping people, the reason we were there was because the work we did helped people. The leverage was how much work we could do versus how much that helped people. That is product oriented. I think the reason that any engineer works is because they feel like their work helps someone. As an engineer, I wanted to go to the place and do the work that had the best chance of helping someone. All of Sun, when I was there, was like that. We were there to help. Yeah, of course, we wanted to make money, but all of us wanted to help. It's the dream, you do work, your work helps. From wkt at tuhs.org Wed Jun 26 13:02:24 2019 From: wkt at tuhs.org (Warren Toomey) Date: Wed, 26 Jun 2019 13:02:24 +1000 Subject: [TUHS] Software Tools Users Group Archive Message-ID: <20190626030221.GA23749@minnie.tuhs.org> All, a while back Debbie Scherrer mailed me a copy of a "Software Tools Users Group" archive, and I've been sitting on my hands and forgetting to add it to the Unix Archive. It's now here: https://www.tuhs.org/Archive/Applications/Software_Tools/STUG_Archive/ The mirrors should pick it up soon. I've gzipped most of it as I'm getting a bit tight on space. Thanks to Debbie for the copy and to her and Clem for reminding me to pull my finger out :) Cheers, Warren From gtaylor at tnetconsulting.net Wed Jun 26 14:07:25 2019 From: gtaylor at tnetconsulting.net (Grant Taylor) Date: Tue, 25 Jun 2019 22:07:25 -0600 Subject: [TUHS] Testing Mail Connectivity In-Reply-To: <20190626025023.GA17457@minnie.tuhs.org> References: <20190626025023.GA17457@minnie.tuhs.org> Message-ID: <4fa4c2ef-93ee-a810-227e-a76d4dd9af9d@spamtrap.tnetconsulting.net> On 6/25/19 8:50 PM, Warren Toomey wrote: >> You need to grok Sendmail's log format (and SMTP in general), but it means >> that Minnie connected to my server, waited the requisite time for the greeting >> banner, and then shat herself when she saw my ginormous banner and dropped the >> connection without so much as a good-bye... > > i.e.a multiline response to the HELO/EHLO which meets the RFC specs. > >> That simple measure, along with the greeting pause and some simple RFC DNS >> checks, block a lot of the crap. The greet pause seems longer than I'd choose, but I think it's less than 60 seconds. Any reasonable mail server should handle that. So that shouldn't be an issue. > I've just subscribed to the Postfix mailing list, and I'll ask on there > if there is a configuration change I can make to allow my Postfix client > to deal properly with Dave's multiline HELO/EHLO response. And why it can't > at present. I'd be surprised if Minnie had a problem with the multi-line greeting. I think Postfix (if that's what Minnie is running) has been dealing with multi-line greetings for a while. There's also the fact that—I suspect—Minnie has been successfully delivering THUS to Dave's server for a while. So unless Dave's multi-line greeting is new.... > But if someone here already knows, could they let me know?! Do you have errors in your log? Dave, Warren, feel free to reply to me (both of us) directly if you want to discuss this further off list. -- Grant. . . . unix || die -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4008 bytes Desc: S/MIME Cryptographic Signature URL: From dave at horsfall.org Wed Jun 26 14:42:51 2019 From: dave at horsfall.org (Dave Horsfall) Date: Wed, 26 Jun 2019 14:42:51 +1000 (EST) Subject: [TUHS] The Great NBN Out(r)age Message-ID: A few bods have asked to see this, so... Actually, "extracted" would be better description than "redacted", but it's too late now; I could rename it and put in a CGI-redirect, but I'm too busy at the moment. ----- A redacted copy of my complaint to T$. www.horsfall.org/Telstra-comp-redact.rtf (yes, RTF; it was written on a Mac). Utterly inexcusable... Please share etc :-) -- Dave From bakul at bitblocks.com Wed Jun 26 17:57:49 2019 From: bakul at bitblocks.com (Bakul Shah) Date: Wed, 26 Jun 2019 00:57:49 -0700 Subject: [TUHS] Paper discussing Unix boot process? In-Reply-To: <20190626022824.GA86961@server.rulingia.com> References: <14453.1554920068@cesium.clock.org> <57C2E8D6-148C-487E-A6AE-B6E0E6EC337C@bitblocks.com> <7b575d14-270c-1d3a-7419-0329ffb42669@esse.ch> <20190626022824.GA86961@server.rulingia.com> Message-ID: On Jun 25, 2019, at 7:28 PM, Peter Jeremy wrote: > > [Resurrecting an old thread to provide some input from Dave Horsfall] > On 2019-Apr-11 06:52:08 +0200, Fabio Scotoni wrote: >> On 4/11/19 1:19 AM, Bakul Shah wrote: >>> On Apr 10, 2019, at 3:24 PM, Clem Cole wrote: >>>> >>>> [...] is the Lions book including PS and PDF and in the original troff thankfully. >>> >>> May be someone will be inspired enough to convert this to troff? Er... I wasn't entirely serious but if I were doing this, I'd start with detexing the source and then manually adding in -ms macros. The detexed source is about 14 lines and surprisingly readable. Almost. This should be a piece of cake for one of you nroff wizards! > ... >> Thus, the first step would be to reverse engineer the troff macros used >> to typeset the book. >> Then the TeX sources would need to be converted to those troff macros; >> this can possibly be automated entirely. >> Then the matching version of troff would need to be used to typeset it >> (likely via apout and V6 or V7 troff). >> Finally, the C/A/T typesetter output would need to be converted to >> PostScript or PDF (either Adobe's psroff or Chris Lewis's psroff from >> comp.unix.sources can likely help with that; I got Lewis's psroff to >> work a while ago, but it's pretty brittle). > > On 2019-Jun-26 11:34:31 +1000, Dave Horsfall wrote: >> 'Twas NROFF on the CSU's LA120 (I should know; I ran the Unix section), >> with draft versions on a Duckwriter which I helped proof-read. Don't know >> whether custom macros were used; quite likely, as he was that sort of >> bloke. After all, he was a Comp Sci lecturer (one of mine!) and if you >> find yourself writing the same lines over and over again... >> >> Going by that snippet of the thread (too much to follow, as I'm still >> figuring out from which lists I've been bounced) it would be a heroic >> effort to reverse-engineer it, and quite likely not worth the trouble. >> >> The original source would've been at Elec Eng, but long gone by now. >> >> As for TROFF, well, I'm not aware that UNSW has a C/A/T :-) >> >> Oh, the LA120 had a single-use nylon ribbon, I think, not fabric, hence >> the somewhat high quality (I no longer have my Lions books to check; lost >> after several house moves). > > -- > Peter Jeremy From tytso at mit.edu Thu Jun 27 01:11:43 2019 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 26 Jun 2019 11:11:43 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626025646.GR925@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> Message-ID: <20190626151143.GC3116@mit.edu> On Tue, Jun 25, 2019 at 07:56:46PM -0700, Larry McVoy wrote: > > It might not be, but it is definitely relevant to Unix. Arguably the > > drivers of Unix's development movement away from R&D-focused places and > > toward product-oriented entities had at least a little to do with > > Larry's topic of complaint. Product managers gained the ammunition to > > demand sustainable development practices, while R&D got a little leaner, > > a little more focused on demonstrating the thesis, a little less focused > > on who might need to run this code five years on... > > In the good old days at Sun, we were very focussed on who would run > this code for decades to come. I think the engineers at Sun were very > focussed on helping people, the reason we were there was because the > work we did helped people. The leverage was how much work we could > do versus how much that helped people. That is product oriented. > > I think the reason that any engineer works is because they feel like > their work helps someone. As an engineer, I wanted to go to the place > and do the work that had the best chance of helping someone. All of > Sun, when I was there, was like that. We were there to help. Yeah, > of course, we wanted to make money, but all of us wanted to help. > It's the dream, you do work, your work helps. Motivations and incentives are a very big and important aspect which is often overlooked in large scale projects. For example, one of the really big problems with device drivers in the embedded space is that the team that works on SOC version X gets disbanded, and immediately reassigned to SOC verison X+1, sometimes before product has even shipped. Having one device driver that works for SOC versions N, N+1, N+2, ... N+5, is really important from a maintainability and being able to send out bug fixes for security flaws. However, it means that whenever you make changes, you need to test on N different older versions. And between the need to release product quickly, and the fact that engineers are !@#@! expensive, and the teams constantly getting formed and reformed, it's much easier to do code reuse by copying, and so you have N different versions of a device driver in a Board Support Package version of the Linux kernel shipping by a SOC vendor. Unfortunately, I have to disagree with Larry, there are many, many engineers who works because they get a paycheck, and so they go home at 5pm. Some people might be free to improve their code on their own time, or late at night, but corporation also preach "work/life balance" --- and then don't fund time for making code long-term maintainable or reducing tech debt. Open source helps because embarassment can be a great motivator, but more important are the fact that there are people who are empowered to say "no" who don't work for the corporation who is trying to cut corners, and who have a higher allegiance to the codebase than their employer. There is a similar related issue around publishing papers to document great ideas. This takes time away from product development, and it used to be that Sun was really prolific at documenting their technical innovations at conferences like Usenix. Over time, the academic traditions started dying off, and managers who came from that tradition moved on, retired, or got promoted beyond the point where they could encourage engineers to do that work. And it wasn't just at Sun; I was working at IBM when IBM decided to take away the (de minimus) bonus for publishing papers at conferences. But at the Usenix board, I remember looking at a chart of the declining number of ATC papers coming from industry over time. And it was very depressing... And the key for all of this is motivation and incentives, as any good historian will tell you. This is true whether probing the start of wars, or the decline of a technical community or tradition. - Ted From tytso at mit.edu Thu Jun 27 01:41:44 2019 From: tytso at mit.edu (Theodore Ts'o) Date: Wed, 26 Jun 2019 11:41:44 -0400 Subject: [TUHS] 4.1c bsd ptrace man entry ("ptrace is unique and arcane") In-Reply-To: References: <1561491205.19116.for-standards-violators@oclsc.org> <20190626004603.GG925@mcvoy.com> Message-ID: <20190626154144.GD3116@mit.edu> On Tue, Jun 25, 2019 at 06:03:47PM -0700, ron minnich wrote: > > I do recall (possibly wrongly) at some point in the 2000s there was an > effort to stop putting stuff in /proc, but rather in /sys, but that > seems to have not worked out. /proc is just too convenient a place, > and by convention, lots of stuff lands there. When looking at linux's /proc, there are three broad categories of stuff: * Traditional process-specific files, in /proc//... * System configuration parameters, aka "sysctls", which are in /proc/sys/... * Other miscellaneous ad-hoc files It's the last category where there has been a big push to only add new files in /sys (aka sysfs), and in general the vast majority of new files have been going into sysfs, and not into /proc. However, for backwards compatibility reasons, all of the old ad-hoc /proc files have stuck around. The files in /sys and /proc/sys tend to be very discplined, in that it's one value per file. That's both a good and bad thing. We don't have a general, efficient, way of supporting files that return a variable list of fields, especially if there is no obvious key. (e.g., like /proc/mounts). And it's certainly the case that looking at, say, /proc/scsi/scsi is much more conveient that iterating over /sys/bus/scsi/devices and grabbing a huge number of tiny files to get the same information. This last is the usual reason where there is temptation by some developers to add a new file in /proc, as opposed to adding several dozen files (per device/process/network connection) in /sys and then needing to promulgate a perl/python/go program to actually get a user friendly status report out. - Ted From lm at mcvoy.com Thu Jun 27 03:44:31 2019 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 26 Jun 2019 10:44:31 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626151143.GC3116@mit.edu> References: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> Message-ID: <20190626174431.GT925@mcvoy.com> On Wed, Jun 26, 2019 at 11:11:43AM -0400, Theodore Ts'o wrote: > Unfortunately, I have to disagree with Larry, there are many, many > engineers who works because they get a paycheck, and so they go home > at 5pm. Some people might be free to improve their code on their own > time, or late at night, but corporation also preach "work/life > balance" --- and then don't fund time for making code long-term > maintainable or reducing tech debt. Yeah, I was talking about 25-30 years ago. And even then there were people who were there for the paycheck. But the people I considered my peers were people who cared deeply about doing work well. The motivation was that we were at Sun, everyone wanted a Sun workstation, which made it all the more important that we did stuff right. If you need any proof, look no further than me. I was the guy who was so happy to be at Sun, I walked around for 3 years saying "I'd do this job for free if I had enough money" :) I think that feeling still exists but it is much harder to find these days, systems work seems to have dried up, kids think a server is a VM, it's a strange world. > There is a similar related issue around publishing papers to document > great ideas. This takes time away from product development, and it > used to be that Sun was really prolific at documenting their technical > innovations at conferences like Usenix. Over time, the academic > traditions started dying off, and managers who came from that > tradition moved on, retired, or got promoted beyond the point where > they could encourage engineers to do that work. And it wasn't just at > Sun; I was working at IBM when IBM decided to take away the (de > minimus) bonus for publishing papers at conferences. Huh, I didn't know IBM gave bonuses for papers, Sun never did. I don't remember, but they may have paid for us to go to a conference. > But at the > Usenix board, I remember looking at a chart of the declining number of > ATC papers coming from industry over time. And it was very depressing... Tell me about it. Systems work just isn't what it once was. From arnold at skeeve.com Thu Jun 27 04:01:16 2019 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 26 Jun 2019 12:01:16 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626174431.GT925@mcvoy.com> References: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> Message-ID: <201906261801.x5QI1Ghs028659@freefriends.org> This is getting a little off topic ... There are still lots of motivated people. I found for myself that working on security products is very motivating; you're developing something that people really NEED, that protects them and their assets, and it's IMPORTANT, not just adding another shiny gadget onto Word or whatever. So, do I care about code quality? Very much. My workplace does too. That said, work/life balance is a real issue. I work to feed my family and keep a roof over their heads. Of course I want to enjoy my work and feel value from it, but out of necessity I've spent a lot of time where it wasn't so good. I'm fortunate that today it's good on both ends. :-) Arnold From aek at bitsavers.org Thu Jun 27 04:08:01 2019 From: aek at bitsavers.org (Al Kossow) Date: Wed, 26 Jun 2019 11:08:01 -0700 Subject: [TUHS] Software Tools Users Group Archive In-Reply-To: <20190626030221.GA23749@minnie.tuhs.org> References: <20190626030221.GA23749@minnie.tuhs.org> Message-ID: I wonder how the Georgia Tech tape recovery effort went. Didn't notice that the first messages about it were in April On 6/25/19 8:02 PM, Warren Toomey wrote: > All, a while back Debbie Scherrer mailed me a copy of a > "Software Tools Users Group" archive, and I've been sitting on my > hands and forgetting to add it to the Unix Archive. It's now here: > > https://www.tuhs.org/Archive/Applications/Software_Tools/STUG_Archive/ > > The mirrors should pick it up soon. I've gzipped most of it as I'm getting > a bit tight on space. > > Thanks to Debbie for the copy and to her and Clem for reminding me to > pull my finger out :) > > Cheers, Warren > From arnold at skeeve.com Thu Jun 27 04:14:18 2019 From: arnold at skeeve.com (arnold at skeeve.com) Date: Wed, 26 Jun 2019 12:14:18 -0600 Subject: [TUHS] Software Tools Users Group Archive In-Reply-To: References: <20190626030221.GA23749@minnie.tuhs.org> Message-ID: <201906261814.x5QIEIxg030241@freefriends.org> Al Kossow wrote: > I wonder how the Georgia Tech tape recovery effort went. > Didn't notice that the first messages about it were in April It hasn't really started yet. The person who found the tapes is moving a little slowly, still trying to choose someone to do the work. When there's something to report, I will do so. I did find Scott Lee's program to dump Pr1me MAGSAV format tapes onto a Unix system. It's now available at https://github.com/arnoldrobbins/pdump if anyone wants it. Arnold From imp at bsdimp.com Thu Jun 27 04:18:36 2019 From: imp at bsdimp.com (Warner Losh) Date: Wed, 26 Jun 2019 12:18:36 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <201906261801.x5QI1Ghs028659@freefriends.org> References: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <201906261801.x5QI1Ghs028659@freefriends.org> Message-ID: On Wed, Jun 26, 2019 at 12:01 PM wrote: > This is getting a little off topic ... > > There are still lots of motivated people. I found for myself that > working on security products is very motivating; you're developing > something > that people really NEED, that protects them and their assets, and > it's IMPORTANT, not just adding another shiny gadget onto Word or > whatever. > > So, do I care about code quality? Very much. My workplace does too. > > That said, work/life balance is a real issue. I work to feed my family > and keep a roof over their heads. Of course I want to enjoy my work > and feel value from it, but out of necessity I've spent a lot of time > where it wasn't so good. I'm fortunate that today it's good on both > ends. :-) > For me, work life balance dictates WHEN I do the work, not the work that I do. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From cmhanson at eschatologist.net Thu Jun 27 05:22:42 2019 From: cmhanson at eschatologist.net (Chris Hanson) Date: Wed, 26 Jun 2019 12:22:42 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626174431.GT925@mcvoy.com> References: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> Message-ID: <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> One thing to remember about Mach is that it really was a *research* project. Some of the things that have been complained about, e.g. “pointless” or “needless” abstraction and layering, were done specifically to examine the effects of having those layers of abstraction. Does their presence enable different approaches to problems? Do they enable new features altogether? What’s given up by having them? And so on. Just as an example, a lot of the complexity in the Mach VM system comes from the idea that it could provide a substrate for all sorts of different types of systems, and it could have all sorts of different mechanisms underneath supporting it. This means that Mach’s creators got to do things like try dedicated network virtual memory, purpose-specific pagers, compressing pagers, etc. You may not need as much flexibility in a non-research system. For another example, Mach did a lot of extra work around things like processor sets that wouldn’t be needed on (say) a dual-CPU shared-cache uniform-memory systems, but turns out to be important when dealing with things like systems with a hierarchy of CPUs, caches, and memories. Did they know about all the possible needs for that before they started? Having met some of them, the people who created and worked on Mach were passionate about exploring the space of operating system architecture and worked to create a system that would be a good vehicle for that. That wasn’t their only goal—they were also part of the group creating what was at the time CMU’s next-generation academic computing environment—but the sum of their goals generally led to a very pragmatic approach to making things possible to try while also shipping. -- Chris From athornton at gmail.com Thu Jun 27 05:25:23 2019 From: athornton at gmail.com (Adam Thornton) Date: Wed, 26 Jun 2019 12:25:23 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626151143.GC3116@mit.edu> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> Message-ID: "And the key for all of this is motivation and incentives, as any good historian will tell you. This is true whether probing the start of wars, or the decline of a technical community or tradition." This. I work for the Large Synoptic Survey Telescope. I'm in the Data Management group, and specifically in the Science Quality and Reliability Engineering team. The 50,000 foot view of what we do is try to bring software engineering to astronomical software. In general, the thing about scientific software is that, to put it crudely, no one gets a Nobel Prize for software. There's a very strong incentive to write a thing that will solve whatever particular problem you need solved for your paper, and no more. There's also the (highly correlated) problem that, to an established researcher, graduate student labor is free, and your graduate students want to finish their thesis, not engineer quality software. Whereas what I'd like to do is factor the common infrastructure--of which there is a lot--out of the various teetering stacks of special-purpose software and create some sane and maintainable infrastructure that individual researchers can easily and relatively gracefully extend to answer their specific questions. Adam On Wed, Jun 26, 2019 at 8:12 AM Theodore Ts'o wrote: > On Tue, Jun 25, 2019 at 07:56:46PM -0700, Larry McVoy wrote: > > > It might not be, but it is definitely relevant to Unix. Arguably the > > > drivers of Unix's development movement away from R&D-focused places and > > > toward product-oriented entities had at least a little to do with > > > Larry's topic of complaint. Product managers gained the ammunition to > > > demand sustainable development practices, while R&D got a little > leaner, > > > a little more focused on demonstrating the thesis, a little less > focused > > > on who might need to run this code five years on... > > > > In the good old days at Sun, we were very focussed on who would run > > this code for decades to come. I think the engineers at Sun were very > > focussed on helping people, the reason we were there was because the > > work we did helped people. The leverage was how much work we could > > do versus how much that helped people. That is product oriented. > > > > I think the reason that any engineer works is because they feel like > > their work helps someone. As an engineer, I wanted to go to the place > > and do the work that had the best chance of helping someone. All of > > Sun, when I was there, was like that. We were there to help. Yeah, > > of course, we wanted to make money, but all of us wanted to help. > > It's the dream, you do work, your work helps. > > Motivations and incentives are a very big and important aspect which > is often overlooked in large scale projects. > > For example, one of the really big problems with device drivers in the > embedded space is that the team that works on SOC version X gets > disbanded, and immediately reassigned to SOC verison X+1, sometimes > before product has even shipped. Having one device driver that works > for SOC versions N, N+1, N+2, ... N+5, is really important from a > maintainability and being able to send out bug fixes for security > flaws. However, it means that whenever you make changes, you need to > test on N different older versions. And between the need to release > product quickly, and the fact that engineers are !@#@! expensive, and > the teams constantly getting formed and reformed, it's much easier to > do code reuse by copying, and so you have N different versions of a > device driver in a Board Support Package version of the Linux kernel > shipping by a SOC vendor. > > Unfortunately, I have to disagree with Larry, there are many, many > engineers who works because they get a paycheck, and so they go home > at 5pm. Some people might be free to improve their code on their own > time, or late at night, but corporation also preach "work/life > balance" --- and then don't fund time for making code long-term > maintainable or reducing tech debt. > > Open source helps because embarassment can be a great motivator, but > more important are the fact that there are people who are empowered to > say "no" who don't work for the corporation who is trying to cut > corners, and who have a higher allegiance to the codebase than their > employer. > > There is a similar related issue around publishing papers to document > great ideas. This takes time away from product development, and it > used to be that Sun was really prolific at documenting their technical > innovations at conferences like Usenix. Over time, the academic > traditions started dying off, and managers who came from that > tradition moved on, retired, or got promoted beyond the point where > they could encourage engineers to do that work. And it wasn't just at > Sun; I was working at IBM when IBM decided to take away the (de > minimus) bonus for publishing papers at conferences. But at the > Usenix board, I remember looking at a chart of the declining number of > ATC papers coming from industry over time. And it was very depressing... > > And the key for all of this is motivation and incentives, as any good > historian will tell you. This is true whether probing the start of > wars, or the decline of a technical community or tradition. > > - Ted > -------------- next part -------------- An HTML attachment was scrubbed... URL: From drb at msu.edu Thu Jun 27 05:30:05 2019 From: drb at msu.edu (Dennis Boone) Date: Wed, 26 Jun 2019 15:30:05 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: (Your message of Wed, 26 Jun 2019 12:22:42 -0700.) <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> References: <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> Message-ID: <20190626193015.DBB731D33AD@yagi.h-net.msu.edu> > For another example, Mach did a lot of extra work around things like > processor sets that wouldn’t be needed on (say) a dual-CPU > shared-cache uniform-memory systems, but turns out to be important > when dealing with things like systems with a hierarchy of CPUs, > caches, and memories. Did they know about all the possible needs for > that before they started? For example, our campus had one of these, with 96 processors if I recall correctly. Mach-based OS. https://en.wikipedia.org/wiki/BBN_Butterfly De From ben at cogs.com Thu Jun 27 05:32:32 2019 From: ben at cogs.com (Ben Greenfield) Date: Wed, 26 Jun 2019 15:32:32 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> References: <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> Message-ID: <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> > On Jun 26, 2019, at 3:22 PM, Chris Hanson wrote: > > One thing to remember about Mach is that it really was a *research* project. Some of the things that have been complained about, e.g. “pointless” or “needless” abstraction and layering, were done specifically to examine the effects of having those layers of abstraction. Does their presence enable different approaches to problems? I’m surprised the study of Mach needs any justification. Mach certainly happened and is certainly enjoys a large and growing installed base. I’m bothered that some feel the need to belittle the interests of others. I would be more impressed if those criticizing weren’t so hand-wavy and had more specific points…. > Do they enable new features altogether? What’s given up by having them? And so on. > > Just as an example, a lot of the complexity in the Mach VM system comes from the idea that it could provide a substrate for all sorts of different types of systems, and it could have all sorts of different mechanisms underneath supporting it. This means that Mach’s creators got to do things like try dedicated network virtual memory, purpose-specific pagers, compressing pagers, etc. You may not need as much flexibility in a non-research system. > > For another example, Mach did a lot of extra work around things like processor sets that wouldn’t be needed on (say) a dual-CPU shared-cache uniform-memory systems, but turns out to be important when dealing with things like systems with a hierarchy of CPUs, caches, and memories. Did they know about all the possible needs for that before they started? > > Having met some of them, the people who created and worked on Mach were passionate about exploring the space of operating system architecture and worked to create a system that would be a good vehicle for that. That wasn’t their only goal—they were also part of the group creating what was at the time CMU’s next-generation academic computing environment—but the sum of their goals generally led to a very pragmatic approach to making things possible to try while also shipping. > > -- Chris > From lm at mcvoy.com Thu Jun 27 06:21:25 2019 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 26 Jun 2019 13:21:25 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> Message-ID: <20190626202125.GA1206@mcvoy.com> On Wed, Jun 26, 2019 at 03:32:32PM -0400, Ben Greenfield via TUHS wrote: > I would be more impressed if those criticizing weren???t so hand-wavy and had more specific points???. OK, I'll bite. Go read the source in the FreeBSD tree, which has been reduced in size by 60% according to someone on the team. Then come back and draw me a picture of what it does. I get that what I'm asking is non-trivial, I've tried to do it and failed. As a pretty green guy it took me months to do that for SunOS but I could feel myself getting closer. I never got that feeling in the Mach code, I kept getting lost in code that made me say "why is this here?" What I've been trying to say all along is getting something to work is different from making something that both works and is clean. When something is clean it is like a well written paper, it is actively trying to help you understand the information. I value clean code, and I'm not a fan of people excusing messy code because researchers did it. As someone said to me in private, those same researchers are expected to write clear and understandable papers, why is code any different? I also agree with whoever said the Mach guys were trying out all sorts of different ideas, that's cool. What's not cool is that when those ideas didn't pan out they left in all the substrate that had proven to be not needed. And I'll freely admit Mach left a sour taste in my mouth. I read all the papers and those lead me to believe that the code would be on par with the SunOS code. When I finally got to read it I felt like a kid who was promised nice things only to have them taken away. From bakul at bitblocks.com Thu Jun 27 09:19:19 2019 From: bakul at bitblocks.com (Bakul Shah) Date: Wed, 26 Jun 2019 16:19:19 -0700 Subject: [TUHS] Craft vs Research (Re: CMU Mach sources? In-Reply-To: Your message of "Mon, 24 Jun 2019 21:18:06 -0700." <20190625041806.GL7655@mcvoy.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190625005528.GA11929@wopr> <20190625041806.GL7655@mcvoy.com> Message-ID: <20190626231926.EF38A156E40C@mail.bitblocks.com> On Mon, 24 Jun 2019 21:18:06 -0700 Larry McVoy wrote: > > It's not about money. It's about caring about your craft. I cared, > the people I have worked with in industry cared, if they didn't I > left. > > The point I was trying to make was that you can be a student and still > be a pro. Or not. The pros care about their craft. The Mach people, > in my you-get-what-you-paid-for opinion, were not pros. They got a > lot done in a sloppy way and they left a mess. > > I don't know how to say it more clearly, there are plenty examples of > students that wrote clean code. Mach was cool, clean code it was not. I beg to differ with Larry. Research is basically directed exploration. You may have a vague idea about what you're seeking or you may decide to pursue something you stumbled upon. But you are mainly hacking a path through the jungle as it were. In my view it is much too early to build permanent roads (i.e. write "production quality code") during exploration. And if you spend time building roads, you are likely going to slow down or are already stuck and simply using road building to procrastinate! Craft certainly counts but it is not all important. You should just build *what you absolutely need* and do so as simply as possible and keep moving. In fact, the more permanent structures you build, the more afraid you will be to throw away bad bits and pieces if you have to change direction! It doesn't make sense to expect such exploratory code to work well in production. It is not going to be rock solid, it won't take care of corner cases, it will have lousy error recovery, if any, it may not have some necessary features and it may not scale well. From cmhanson at eschatologist.net Thu Jun 27 10:22:05 2019 From: cmhanson at eschatologist.net (Chris Hanson) Date: Wed, 26 Jun 2019 17:22:05 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626202125.GA1206@mcvoy.com> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> Message-ID: <200E87FD-9B9C-4AEC-B3E1-95C9C84068A4@eschatologist.net> On Jun 26, 2019, at 1:21 PM, Larry McVoy wrote: > > I also agree with whoever said the Mach guys were trying out all sorts > of different ideas, that's cool. What's not cool is that when those > ideas didn't pan out they left in all the substrate that had proven to > be not needed. It seems like you’re still missing the point. All the different ideas weren’t implemented by “the Mach [people] trying all sorts of different ideas” in-tree, they were implemented by a variety of researchers *atop* the large set of abstractions and layers Mach provided. All the substrate *wasn’t* proven to be not needed, if anything it was proven to be very useful in performing OS research experiments without having to do a lot of work on the substrate itself. I also read a lot of the Mach code very early in my career and found it pretty comprehensible. -- Chris From lm at mcvoy.com Thu Jun 27 11:02:25 2019 From: lm at mcvoy.com (Larry McVoy) Date: Wed, 26 Jun 2019 18:02:25 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <200E87FD-9B9C-4AEC-B3E1-95C9C84068A4@eschatologist.net> References: <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <200E87FD-9B9C-4AEC-B3E1-95C9C84068A4@eschatologist.net> Message-ID: <20190627010224.GC1206@mcvoy.com> On Wed, Jun 26, 2019 at 05:22:05PM -0700, Chris Hanson wrote: > On Jun 26, 2019, at 1:21 PM, Larry McVoy wrote: > > > > I also agree with whoever said the Mach guys were trying out all sorts > > of different ideas, that's cool. What's not cool is that when those > > ideas didn't pan out they left in all the substrate that had proven to > > be not needed. > > It seems like you???re still missing the point. I'm not missing anything. Go read this: https://www.cs.ubc.ca/~norm/508/2009W1/mach_usenix86.pdf It talks about how simple Mach is, how it is going to be what Unix wanted to be but Unix got too complicated. Etc. It sounds fantastic, too good to be true and that's exactly what the code is. You can go on all you want about all the cool research it enabled, which I've not disputed other than to say I didn't see much work out. But OK, cool research vehicle, got it. What it is not is the simple awesome system that the papers described. I was super stoked when I read that initial Mach paper, it seemed like they wanted to clean up Unix and they had a plan. I was very hopeful that they were doing that, I agreed with their statements in section 2. Anyone who has read the code would have a hard time reconciling their code with the picture they painted in their papers. And indeed, the Mach supporters have said nothing about the code, other than to say it is a research system and you can't expect clean code. If it had been advertised as that you wouldn't hear a peep out of me. But it was advertised as a clean up of poor choices in Unix, it was advertised as simple and clean. It is anything but that. I've got no problem with prototypes so long as it is clear that's what it is. My disappointment with Mach is I thought they were cleaning things up, that's what they said, that's not what they delivered. My beef is with their false advertising. If they had advertised that this was a research system for exploring OS research, not a production ready system, I'd have been fine. That's not how I read the Mach papers. They made promises that they didn't deliver. With that, I'm done on this topic. I'm not going to convince some people of what I think, and they are not going to convince me of what I think. From tuhs at eric.allman.name Thu Jun 27 10:16:07 2019 From: tuhs at eric.allman.name (tuhs at eric.allman.name) Date: Wed, 26 Jun 2019 17:16:07 -0700 Subject: [TUHS] Craft vs Research (Re: CMU Mach sources? In-Reply-To: <20190626231926.EF38A156E40C@mail.bitblocks.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190625005528.GA11929@wopr> <20190625041806.GL7655@mcvoy.com> <20190626231926.EF38A156E40C@mail.bitblocks.com> Message-ID: <5db5af7f-d11b-2d87-edd9-fa5aae855fb5@neophilic.com> I think Larry is right, but also wrong. I think I can speak from experience. The goal of research is not to produce consumer-ready code, but to explore ideas. Nasty things sometimes happen in that environment. But that doesn't mean that code doesn't have to work. My introduction to coding on a research project was INGRES, at the time the competitor to System R (now DB/2, better known as "anything SQL") from IBM Research. By the very nature of the problem, the main complaint was that "Relational Databases Cannot Work" --- so proving that they could was a major part of the research agenda. At one point (pre-commercial) INGRES stored the telecom wiring diagram of New York City. It wasn't always a pleasant experience, but we learned a lot, mostly happy, most of the time. A lot of our motivation was because real people were using our code to do real work. Had we hung them out in the wind to dry, we wouldn't have gotten that feedback, and frankly I think RDBMS wouldn't have progressed so far and so fast. But when I left INGRES I talked with Mike Stonebraker, who asked me where I thought the project should be going. At that point I thought it was clear that the research objectives had been satisfied, and there was the beginnings of a commercial company to move it forward, so I advised that the old code base (which at that point I had written or substantially modified well over 50%) should be abandoned. Do a new system from scratch, in any language, (and I quote) "even in LISP if that's the right decision." Unfortunately the first version of Postgres was written in LISP --- my breed of humor was apparently unappreciated at that time. But from a research perspective the goal was no longer to produce something that actually worked in the real world, but to explore new ideas, including bad ones. I wasn't involved with Postgres personally, but I think Larry's analysis was essentially correct as I know it. I was extraordinarily lucky to have ended up at Berkeley in the mid-70s when UNIX was just becoming a "thing", and I can assure you that while there were a lot of people who just wanted to get their degrees, there was also a large cadre wanting to produce good stuff that could make peoples' lives better. eric From cmhanson at eschatologist.net Thu Jun 27 11:26:50 2019 From: cmhanson at eschatologist.net (Chris Hanson) Date: Wed, 26 Jun 2019 18:26:50 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190627010224.GC1206@mcvoy.com> References: <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <200E87FD-9B9C-4AEC-B3E1-95C9C84068A4@eschatologist.net> <20190627010224.GC1206@mcvoy.com> Message-ID: On Jun 26, 2019, at 6:02 PM, Larry McVoy wrote: > Anyone who has read the code would have a hard time reconciling their > code with the picture they painted in their papers. And indeed, the > Mach supporters have said nothing about the code, other than to say it > is a research system and you can't expect clean code. Then I’ll say it: I did find the Mach code plenty clean *for what it was trying to accomplish* — providing the layering and abstractions necessary to make extensible what had traditionally been kernel code, including allowing it to be developed out-of-tree. -- Chris From lyndon at orthanc.ca Thu Jun 27 14:01:14 2019 From: lyndon at orthanc.ca (Lyndon Nerenberg) Date: Wed, 26 Jun 2019 21:01:14 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: <20190626202125.GA1206@mcvoy.com> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> Message-ID: <40c8b56dc61f5026@orthanc.ca> Larry McVoy writes: > OK, I'll bite. Go read the source in the FreeBSD tree, which has been > reduced in size by 60% according to someone on the team. Then come > back and draw me a picture of what it does. Larry, it seems to me your argument is the Mach code should never have been incorporated into BSD in the first place. That's fine, but it's not the Mach developers fault that happened, so maybe you should lay off them for not writing their research software to a production shop standard they were never a part of? --lyndon From ben at cogs.com Thu Jun 27 20:34:51 2019 From: ben at cogs.com (Ben Greenfield) Date: Thu, 27 Jun 2019 06:34:51 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <40c8b56dc61f5026@orthanc.ca> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <40c8b56dc61f5026@orthanc.ca> Message-ID: <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> > On Jun 27, 2019, at 12:01 AM, Lyndon Nerenberg wrote: > > Larry McVoy writes: > >> OK, I'll bite. Go read the source in the FreeBSD tree, which has been >> reduced in size by 60% according to someone on the team. Then come >> back and draw me a picture of what it does. > > Larry, it seems to me your argument is the Mach code should never > have been incorporated into BSD in the first place. That's fine, > but it's not the Mach developers fault that happened, so maybe you > should lay off them for not writing their research software to a > production shop standard they were never a part of? My understanding is that the BSD layer was a requirement from DARPA. DARPA wanted a “normal” interface to the kernel and BSD was that interface. > > --lyndon From arnold at skeeve.com Thu Jun 27 20:59:34 2019 From: arnold at skeeve.com (arnold at skeeve.com) Date: Thu, 27 Jun 2019 04:59:34 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <40c8b56dc61f5026@orthanc.ca> <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> Message-ID: <201906271059.x5RAxZZ4020844@freefriends.org> Larry McVoy writes: > >> OK, I'll bite. Go read the source in the FreeBSD tree, which has been > >> reduced in size by 60% according to someone on the team. Then come > >> back and draw me a picture of what it does. On Jun 27, 2019, at 12:01 AM, Lyndon Nerenberg wrote: > > Larry, it seems to me your argument is the Mach code should never > > have been incorporated into BSD in the first place. That's fine, > > but it's not the Mach developers fault that happened, so maybe you > > should lay off them for not writing their research software to a > > production shop standard they were never a part of? Ben Greenfield via TUHS wrote: > My understanding is that the BSD layer was a requirement from DARPA. > DARPA wanted a “normal” interface to the kernel and BSD was that interface. Yes, Mach had to provide a BSD layer on top, but that's not the source of Larry's gripes. It's the other way around. 4.4 BSD pulled the VM code out of Mach and into BSD to provide mmap and some level of portability off the Vax. From there the Mach code got into FreeBSD. That's what Larry is complaining about and what Lyndon is saying isn't fair to the Mach guys. Thanks, Arnold From ben at cogs.com Thu Jun 27 21:13:05 2019 From: ben at cogs.com (Ben Greenfield) Date: Thu, 27 Jun 2019 07:13:05 -0400 Subject: [TUHS] CMU Mach sources? In-Reply-To: <201906271059.x5RAxZZ4020844@freefriends.org> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <40c8b56dc61f5026@orthanc.ca> <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> <201906271059.x5RAxZZ4020844@freefriends.org> Message-ID: <648F2E41-08C4-4A76-9AD0-D69F851EF0D7@cogs.com> > On Jun 27, 2019, at 6:59 AM, arnold at skeeve.com wrote: > > Larry McVoy writes: >>>> OK, I'll bite. Go read the source in the FreeBSD tree, which has been >>>> reduced in size by 60% according to someone on the team. Then come >>>> back and draw me a picture of what it does. > > On Jun 27, 2019, at 12:01 AM, Lyndon Nerenberg wrote: >>> Larry, it seems to me your argument is the Mach code should never >>> have been incorporated into BSD in the first place. That's fine, >>> but it's not the Mach developers fault that happened, so maybe you >>> should lay off them for not writing their research software to a >>> production shop standard they were never a part of? > > Ben Greenfield via TUHS wrote: >> My understanding is that the BSD layer was a requirement from DARPA. >> DARPA wanted a “normal” interface to the kernel and BSD was that interface. > > Yes, Mach had to provide a BSD layer on top, but that's not the source > of Larry's gripes. > > It's the other way around. 4.4 BSD pulled the VM code out of Mach and > into BSD to provide mmap and some level of portability off the Vax. From > there the Mach code got into FreeBSD. That's what Larry is complaining > about and what Lyndon is saying isn't fair to the Mach guys. Thank you for this clarification, so this conversation has been going on since the 80’s and gets ignited from time to time. Thank you, Ben > > Thanks, > > Arnold From arnold at skeeve.com Thu Jun 27 21:39:37 2019 From: arnold at skeeve.com (arnold at skeeve.com) Date: Thu, 27 Jun 2019 05:39:37 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <648F2E41-08C4-4A76-9AD0-D69F851EF0D7@cogs.com> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <40c8b56dc61f5026@orthanc.ca> <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> <201906271059.x5RAxZZ4020844@freefriends.org> <648F2E41-08C4-4A76-9AD0-D69F851EF0D7@cogs.com> Message-ID: <201906271139.x5RBdbS0025409@freefriends.org> Ben Greenfield wrote: > Thank you for this clarification, so this conversation has been going > on since the 80’s and gets ignited from time to time. 4.4 was very early 90s, IIRC, but basically, yes. Arnold From imp at bsdimp.com Fri Jun 28 00:58:22 2019 From: imp at bsdimp.com (Warner Losh) Date: Thu, 27 Jun 2019 08:58:22 -0600 Subject: [TUHS] CMU Mach sources? In-Reply-To: <648F2E41-08C4-4A76-9AD0-D69F851EF0D7@cogs.com> References: <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190626024503.GA43970@wopr> <20190626025646.GR925@mcvoy.com> <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <40c8b56dc61f5026@orthanc.ca> <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> <201906271059.x5RAxZZ4020844@freefriends.org> <648F2E41-08C4-4A76-9AD0-D69F851EF0D7@cogs.com> Message-ID: On Thu, Jun 27, 2019 at 5:13 AM Ben Greenfield via TUHS < tuhs at minnie.tuhs.org> wrote: > > > > On Jun 27, 2019, at 6:59 AM, arnold at skeeve.com wrote: > > > > Larry McVoy writes: > >>>> OK, I'll bite. Go read the source in the FreeBSD tree, which has > been > >>>> reduced in size by 60% according to someone on the team. Then come > >>>> back and draw me a picture of what it does. > > > > On Jun 27, 2019, at 12:01 AM, Lyndon Nerenberg > wrote: > >>> Larry, it seems to me your argument is the Mach code should never > >>> have been incorporated into BSD in the first place. That's fine, > >>> but it's not the Mach developers fault that happened, so maybe you > >>> should lay off them for not writing their research software to a > >>> production shop standard they were never a part of? > > > > Ben Greenfield via TUHS wrote: > >> My understanding is that the BSD layer was a requirement from DARPA. > >> DARPA wanted a “normal” interface to the kernel and BSD was that > interface. > > > > Yes, Mach had to provide a BSD layer on top, but that's not the source > > of Larry's gripes. > > > > It's the other way around. 4.4 BSD pulled the VM code out of Mach and > > into BSD to provide mmap and some level of portability off the Vax. From > > there the Mach code got into FreeBSD. That's what Larry is complaining > > about and what Lyndon is saying isn't fair to the Mach guys. > > Thank you for this clarification, so this conversation has been going on > since the 80’s and gets ignited from time to time. > Yea, there's been three or four rounds of rototilling in the FreeBSD vm. While it shares some structures with its Mach ancestors, complaining about it to paint Mach as this or that is unfair. FreeBSD's sys/vm has had a crapton of changes to make to scale in an MP system, to adopt non-uniform page sizes, etc. Some of these changes have been done with skill and subtly. Some have been done by a ham-fisted goober. It would overstate things to say the most recognizable part of Mach is the copyright headers :), but those bits are arguable the most unchanged. What's resulted lacks architectural purity because it wasn't designed from scratch to be pure. It's grown organically over the last 30-odd years as different groups, companies and organizations have found it necessary to fund development. The SunOS 4.x code, which was almost donated to the BSD project only to be scuttled at the last minute, has the twin advantages of being purpose built for only two architectures and didn't need to scale to thousands of CPUs, and stopped evolving in the 90s. As such, it can maintain its architectural purity since it's not needed to grow and adapt since then. All that "growth" happened in Solaris. So it's also a bit unfair to compare that code which was developed over a decade to FreeBSD's. But yea, DARPA was about networking in the Unix world. BSD was Unix at the time since AT&T didn't have the business structure to do the contracts, and BSD's 2BSD or 3BSD was little more than a slightly more evolved V7 research edition with some really cool user land features and a few more drivers for hardware BSD users had. The Mach VM came late to the game and was never the focus of the DARPA contracts. For another view on how well CSRG integrated Mach into BSD, see NetBSD's uvm, a complete rewrite. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From clemc at ccc.com Fri Jun 28 03:06:12 2019 From: clemc at ccc.com (Clem Cole) Date: Thu, 27 Jun 2019 13:06:12 -0400 Subject: [TUHS] Craft vs Research (Re: CMU Mach sources? In-Reply-To: <5db5af7f-d11b-2d87-edd9-fa5aae855fb5@neophilic.com> References: <8D0B5B0D-9956-47D7-8D36-1729BB1E1DA9@eschatologist.net> <5df8c6f6-2768-4bfb-9c47-3345098078a7@PU1APC01FT048.eop-APC01.prod.protection.outlook.com> <20190625000630.GA7655@mcvoy.com> <20190625003120.GA28608@mit.edu> <20190625004523.GB7655@mcvoy.com> <20190625005528.GA11929@wopr> <20190625041806.GL7655@mcvoy.com> <20190626231926.EF38A156E40C@mail.bitblocks.com> <5db5af7f-d11b-2d87-edd9-fa5aae855fb5@neophilic.com> Message-ID: On Wed, Jun 26, 2019 at 9:12 PM wrote: > I think Larry is right, but also wrong. I think I can speak from > experience. > +1 > > The goal of research is not to produce consumer-ready code, but to > explore ideas. Nasty things sometimes happen in that environment. > > But that doesn't mean that code doesn't have to work. And BTW, Mach is an example of something that did work. And it worked "good enough" -- I think Ted's comments follow exactly these ideas. > My introduction to coding on a research project was INGRES, at the time > the competitor to System R (now DB/2, better known as "anything SQL") > from IBM Research. By the very nature of the problem, the main complaint > was that "Relational Databases Cannot Work" --- so proving that they could was > a major part of the research agenda. > > At one point (pre-commercial) INGRES stored the telecom wiring diagram of > New York City. It wasn't always a pleasant experience, but we learned a > lot, mostly happy, most of the time. A lot of our motivationwas because > real people were using our code to do real work. Had we hung them out in > the wind to dry, we wouldn't have gotten that feedback, and frankly I > think RDBMS wouldn't have progressed so far and so fast. > > But when I left INGRES I talked with Mike Stonebraker, who asked me > where I thought the project should be going. At that point I thought it > was clear that the research objectives had been satisfied, and there was the > beginnings of a commercial company to move it forward, so I advised that > the old code base (which at that point I had written or > substantially modified well over 50%) should be abandoned. Do a new system > from scratch, in any language, (and I quote) "even in LISP if that's the > right decision." Unfortunately the first version of Postgres > was written in LISP --- my breed of humor was apparently unappreciated at > that time. But from a research perspective the goal was no longer to produce > something that actually worked in the real world, but to explore new > ideas, including bad ones. I wasn't involved with Postgres personally, > but I think Larry's analysis was essentially correct as I know it. > > I was extraordinarily lucky to have ended up at Berkeley in the mid-70s when > UNIX was just becoming a "thing", and I can assure you that while there > were a lot of people who just wanted to get their degrees, there was also > a large cadre wanting to produce good stuff that could make peoples' > lives better. > Well said thanks, Clem -------------- next part -------------- An HTML attachment was scrubbed... URL: From lm at mcvoy.com Fri Jun 28 03:25:08 2019 From: lm at mcvoy.com (Larry McVoy) Date: Thu, 27 Jun 2019 10:25:08 -0700 Subject: [TUHS] CMU Mach sources? In-Reply-To: References: <20190626151143.GC3116@mit.edu> <20190626174431.GT925@mcvoy.com> <34DB62C2-7D8C-468B-99E1-CA035C9141A2@eschatologist.net> <04827B96-5B5E-473E-A95F-67C4292B69D1@cogs.com> <20190626202125.GA1206@mcvoy.com> <40c8b56dc61f5026@orthanc.ca> <8249B608-606A-4A32-8D56-D78F16BC217B@cogs.com> <201906271059.x5RAxZZ4020844@freefriends.org> <648F2E41-08C4-4A76-9AD0-D69F851EF0D7@cogs.com> Message-ID: <20190627172508.GC20201@mcvoy.com> On Thu, Jun 27, 2019 at 08:58:22AM -0600, Warner Losh wrote: > The SunOS 4.x code, which was almost donated to the BSD project only to be > scuttled at the last minute, has the twin advantages of being purpose built > for only two architectures and didn't need to scale to thousands of CPUs, > and stopped evolving in the 90s. As such, it can maintain its architectural > purity since it's not needed to grow and adapt since then. All that > "growth" happened in Solaris. So it's also a bit unfair to compare that > code which was developed over a decade to FreeBSD's. Yeah, I actually agree with this. The SunOS I love so much didn't scale at all. Which means it was inherently more simple and easier to understand. From wkt at tuhs.org Fri Jun 28 09:07:55 2019 From: wkt at tuhs.org (Warren Toomey) Date: Fri, 28 Jun 2019 09:07:55 +1000 Subject: [TUHS] Test, please ignore Message-ID: <20190627230755.GA28404@minnie.tuhs.org> This e-mail going to the TUHS and Dave Horsfall to see if he's getting the mail from the list. Please ignore. Warren