Categories
LUG

Trusting Linux

It is November 2018. While searching for articles on Linux security, I came across reproducible builds here:- https://www.privateinternetaccess.com/blog/2018/07/reproducible-builds-solving-an-old-open-source-problem-to-improve-security/ “Inconsistent Compiling Environments Lead to Inconsistent Behavior. Different compilers will produce different machine code, even from identical source code. These inconsistencies are one of the many sources of unreliable software, because these different compiled applications all have different behavior and different introduced problems. An app that behaves perfectly when compiled one way may not work at all with another compiler. Even worse, because of all of the different combinations of source code, hardware, compiler versions, settings, and other environmental factors, it becomes extremely hard to identify if your software is fundamentally secure.

Is this source code compromised? Another problem introduced by inconsistent software is that it is impossible to tell if the software that is coming out of your compiler app is exactly what the author intended. A malicious compiler app could create machine code that has intentional vulnerabilities and enable surveillance of systems or outright takeover of a secure system. The malicious compiler problem has been one that has been discussed since the 1980’s. Ken Thompson famously made a proof-of-concept compiler hack that was not only itself compromised, but it would compromise every update to itself and be virtually impossible to detect. http://wiki.c2.com/?TheKenThompsonHack The most interesting piece of this proof-of-concept discusses how this wouldn’t even have to be a specific type of compiler. Other pieces of machine code, like bootloaders and the most fundamental software and firmware on a computer, could be compromised to introduce backdoors system-wide and be extremely hard to detect. This is why there are entire movements (CoreBoot and LibreBoot https://www.coreboot.org/) behind moving all of these fundamental pieces to open-source solutions. It removes the most likely place for something like this to hide.”
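
To make the problem concrete, here is a toy sketch in Python (not any real build tool) of why an embedded timestamp defeats bit-for-bit reproducibility, and of what the reproducibility check itself amounts to: build twice, hash both outputs, compare.

```python
# Toy illustration: a pretend "compiler" that optionally embeds a
# build timestamp into its output, plus the reproducibility check.
import hashlib
import time

def toy_build(source: str, embed_timestamp: bool) -> bytes:
    """The 'binary' is just the source plus, optionally, a build
    timestamp -- the classic source of non-determinism."""
    output = source.encode()
    if embed_timestamp:
        output += f"\nbuilt at {time.time()}".encode()
    return output

def is_reproducible(source: str, embed_timestamp: bool) -> bool:
    h1 = hashlib.sha256(toy_build(source, embed_timestamp)).hexdigest()
    time.sleep(0.01)  # the second build happens a little later
    h2 = hashlib.sha256(toy_build(source, embed_timestamp)).hexdigest()
    return h1 == h2

print(is_reproducible("int main(){}", True))   # False: hashes differ
print(is_reproducible("int main(){}", False))  # True: bit-for-bit identical
```

Real-world culprits are the same in spirit: timestamps, file ordering, build paths and locale differences baked into the output.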

When I moved on and searched specifically for reproducible builds, most of the hits were from 2017, like this:- https://www.reddit.com/r/linux/comments/6p14q0/debians_archive_is_up_to_94_for_reproducible_build/ and this:- https://news.ycombinator.com/item?id=13690703 leading me to think the initiative had stalled. But then I found the Reproducible Builds weekly blog:- https://reproducible-builds.org/blog/posts/187/ which is now on week 187 and going strong.

Some of the arguments against reproducible builds go along the lines of “what’s the point if the hardware is compromised?” The next link addresses some of those hardware issues:- https://puri.sm/learn/intel-me/

On one of the weekly reproducible-builds.org posts I found a reference to a blog post: https://puri.sm/posts/protecting-the-digital-supply-chain/ After reading it, I discovered the company Purism, who are making motherboards with the Intel ME disabled, using programmable fuses: https://puri.sm/learn/intel-me/

https://www.reddit.com/r/linux/comments/540nmm/making_reproducible_builds/ :- “Nix/Guix have fewer packages to deal with, and although their infrastructure is already suited to r-b, most of the work for r-b is getting upstream projects to accept patches. Fixing the build environment is not enough, although it gets Nix/Guix quite a lot of the way there – (1) we at Debian think this is “cheating”, I can go into this in some more detail but I thought I’d keep this post short, and (2) it doesn’t work in all cases – e.g. for a build process that takes 2000 +/- 100 seconds where sphinx/doxygen is run at the end of it, embedding timestamps you can’t control. There is nothing Nix/Guix/anyone else can do about this, except to patch upstream. And most of the work we’re doing at Debian is patching upstream, which will eventually benefit everyone including Nix/Guix. Also, the biggest goal for r-b is to actually have people upload attestations that “I built source X to get binary hash Y”. It is only a very marginal security improvement to “allow people to theoretically reproduce the binary”, because this only gives security to people who actually do rebuilds – but if we assume everyone does this, then we might as well all switch to Gentoo or some source-based distro. No, we need to distribute attestations so that even people who can’t rebuild everything can benefit. I don’t see Nix/Guix working on this; we are building background infrastructure (plus theoretical research) to eventually be able to do this.”
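
The attestation idea in that quote – “I built source X to get binary hash Y” – can be sketched in a few lines of Python. The record format and the rebuilder names are made up for illustration; real attestations would be cryptographically signed.

```python
# Hypothetical attestation records: (builder, source hash, binary hash).
import hashlib

def make_attestation(builder: str, source_hash: str, binary: bytes):
    """What a rebuilder would publish (in reality, signed with its key)."""
    return (builder, source_hash, hashlib.sha256(binary).hexdigest())

def attested_by(binary: bytes, source_hash: str, attestations, trusted):
    """Which trusted rebuilders claim this exact binary came from this source?"""
    got = hashlib.sha256(binary).hexdigest()
    return [b for (b, s, h) in attestations
            if b in trusted and s == source_hash and h == got]

# Two independent rebuilders attest to the same artifact; a user who
# cannot rebuild the package can still check the binary they received.
artifact = b"\x7fELF...package contents..."
atts = [make_attestation("rebuilder-1", "src-abc", artifact),
        make_attestation("rebuilder-2", "src-abc", artifact)]
print(attested_by(artifact, "src-abc", atts, {"rebuilder-1", "rebuilder-2"}))
```

This is the point of distributing attestations: checking costs the user one hash, not a full rebuild.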

https://www.reddit.com/r/linux/comments/6p14q0/debians_archive_is_up_to_94_for_reproducible_build/ Debian’s Archive Is Up To 94% For Reproducible Build

So, what are the main types of non-reproducible packages in that 6%? I don’t see specifics from a skim of https://reproducible-builds.org or https://wiki.debian.org/ReproducibleBuilds. Is it specific groups of packages that aren’t reliably reproducible, or is it a global thing where packages just chronically have reproducibility problems every so often?

This seems to be the current list of unreproducible packages in Debian unstable: https://tests.reproducible-builds.org/debian/unstable/index_dd-list.html

Here’s a detailed article on an effort to make a large codebase reproducible: http://blog.netbsd.org/tnf/entry/netbsd_fully_reproducible_builds

https://news.ycombinator.com/item?id=14834386 :- Status update from the Reproducible Builds project (debian.org), posted by lamby on July 23, 2017 :- “Guix and Nix are input-reproducible. Given the same input description (input being the source files and any dependencies) an output comes out. Builds are then looked up in a build cache based on the hash of all the combined inputs. However, the _output_ of Nix artifacts is not reproducible. Running the same input twice will yield a different result. https://www.reddit.com/r/NixOS/comments/2n926h/get_best_of_both_worlds_guix_vs_nixos/ Nix does some tricks to improve output reproducibility, like building things in sandboxes with a fixed time and using tarballs without modification dates, but bit-by-bit reproducible output is not their goal. They also don’t have the manpower for this. Currently, a build is built by a trusted build server for which you have the public key. And you look up the build by input hash, but have no way to check if the thing the build server is serving is legit. It’s fully based on trust. However, with Debian putting so much effort into reproducible output, Nix can benefit too. In the future, we would like to get rid of the ‘trust-based’ build servers and instead move to a consensus model. Say if 3 servers give the same output hash given an input hash, then we trust that download and avoid a compile from source. If you still don’t trust it, you can build from source yourself and check if the source is trustworthy.”
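
The consensus model at the end of that quote is easy to sketch. This is a hypothetical illustration, not Nix code: accept a cached binary only when at least k independent build servers report the same output hash for a given input hash, otherwise fall back to building from source.

```python
from collections import Counter

def fetch_or_build(input_hash, server_reports, k=3):
    """server_reports maps server name -> {input hash: output hash}."""
    votes = Counter(r[input_hash] for r in server_reports.values()
                    if input_hash in r)
    if votes:
        output_hash, count = votes.most_common(1)[0]
        if count >= k:
            return ("download", output_hash)  # k servers agree: trust it
    return ("build-from-source", None)        # no consensus: rebuild locally

# Made-up cache servers all reporting the same output hash:
reports = {"cache-a": {"abc123": "out777"},
           "cache-b": {"abc123": "out777"},
           "cache-c": {"abc123": "out777"}}
print(fetch_or_build("abc123", reports))  # ('download', 'out777')
```

Note that this only works if builds are bit-for-bit reproducible in the first place; otherwise even honest servers would disagree about the output hash.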

Arguments for and against reproducible builds:- https://news.ycombinator.com/item?id=13690703 NetBSD fully reproducible builds (netbsd.org)

“aseipp on Feb 21, 2017:

NetBSD can build the whole OS from source tree to distribution media with a single command.

This one, at least, can be done in NixOS/Guix once you check out the source – the Nix package manager can technically be installed on any Linux distro, too (plus some other ports to Cygwin/FreeBSD/Mac etc.) – and run a single command to get the ISO, or any kind of build product you want. The carefully tested and maintained portability/cross-compilation is another thing though: NetBSD has fantastic support here that is not easily replicated without just doing a ton of work. Its universal, basically-always-works cross-compilation, everywhere, is rather unique. You can’t build NixOS ISOs natively on e.g. Nix-on-Darwin, which is rather unfortunate.”

https://www.schneier.com/blog/archives/2006/01/countering_trus.html In 2006, Bruce Schneier blogged a pretty good breakdown of a paper by David A. Wheeler https://dwheeler.com/trusting-trust/ on defending against Thompson’s specific attack. The paper itself is still paywalled as of this time, to the best of my knowledge. The Wheeler paper is very interesting, but it is focused on the bowels of compiler design, and the question here is more focused on end-user precautions than compiler design or even systems programming. There are generally two ways we understand the risk involved with compiling a specific piece of code:

1. Authenticating the code as a true, untampered-with piece of code written by someone whom we have chosen to trust.
2. Closely examining the content of the code itself, and thoroughly understanding what it does.

The second case – a thorough code audit – is a huge, long, resource-intensive task. It almost never really happens for codebases of nontrivial size, because it is simply too costly. Much more often, we are looking at the first case: trusting the coder, and validating that the code hasn’t been tampered with between the coder and the consumer. Of course, in the end, as others have pointed out, these integrity checks only do any good if the developers’ infrastructure hasn’t been compromised, if the developer was coding well, and so on.
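
For the first case, the end-user routine is mechanical enough to sketch. This is a minimal example, assuming the upstream project publishes a SHA-256 checksum over a separate, authenticated channel; the file name and variable names are placeholders.

```python
# Verify a downloaded source tarball against a published checksum
# before unpacking and compiling it.
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream the file so large tarballs don't have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, published_hash: str) -> bool:
    return sha256_of_file(path) == published_hash

# published_hash would be copied from the author's (ideally signed)
# checksum file, not fetched from the same server as the tarball:
# if verify_download("foo-1.0.tar.gz", published_hash):
#     ...unpack and build...
```

As the paragraph above says, this authenticates the transport, not the author: it is only as good as the infrastructure that produced the published hash.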

http://entrop-x.com/index.php/en/cyber-security/40-verifying-the-integrity-of-a-compiler

https://itsfoss.com/install-software-from-source-code/


https://softwareengineering.stackexchange.com/questions/184874/is-ken-thompsons-compiler-hack-still-a-threat

https://askubuntu.com/questions/28372/how-do-i-get-and-modify-the-source-code-of-packages-installed-through-apt-get

http://www.linux-mag.com/id/976/ compiling and linking

https://www.linuxquestions.org/linux/answers/Programming/Building_C_programs_on_Linux

Categories
LUG Shared

MINIX — The most popular OS in the world, thanks to Intel https://buff.ly/2zQAbg6


Categories
LUG podcasts

Hacker Public Radio

HPR is a podcast community for those who want to have a go at podcasting but don’t have the resources or skills to go it alone from the start (although some of the regular contributors also have their own independent podcasts). Setting up and managing a podcast site is quite a commitment if you’re not sure whether you want to do more than the occasional show, and as HPR already has regular listeners, you’re sure to have someone listen to what you record.
HPR releases a new show every day, Monday to Friday, all made by community members, and they can be on any topic of interest to the contributors. As the title suggests, many of the shows are technology related, but they don’t have to be. Recent shows have covered repairing a truck, brewing beer and recording a band. So, for example, if you’re into making jam and want to share how to do it on a podcast, you can.
A few weeks ago, one of the volunteers who manage the website posted a podcast explaining that there were only a few shows left in the queue for publication, and that if things didn’t improve there was a danger the site would have to close for lack of future content. This was the rallying cry I needed to get off my backside and record something for HPR.
It’s not my first go at podcasting, as I was a member of the fullcirclemagazine.org podcast for a few months with some other members of this Makerspace/LUG, and had also done a one-off podcast with Dan Lynch (Linux Outlaws), Pete Cannon (The Dick Turpin Road Show), Les Pounder (Full Circle podcast) and Heeed, which was released as HPR episode 0844, called ‘The Flying Handbag’. We recorded this show during Blackpool Barcamp in 2011. It’s quite funny, although slightly adult in content, so if you want to listen, it’s in the HPR archive.
However, recording myself alone, talking (or rambling) about something I wanted to share with an audience, was something I’d not done before.
The biggest fear I had was the perceived difficulty of getting the recording into a format ready for broadcast, but that’s all taken care of by the community volunteers. All I had to do was record my show (for my first show, I chose to share how I started to use Linux), choose when I wanted it to be aired and follow the instructions on the upload page; the rest is done for you.
So if you’ve ever been tempted to have a go at a podcast but didn’t know how to go about it, give it a go via the HPR podcast community:
http://hackerpublicradio.org/index.php
Watch out, Blackpool Makerspace attendees: I’ll be bringing the Zoom H2 to meetings and trying to encourage people to do interviews for future HPR shows.
This is an edited version of a post on my occasional blog at:
http://tony-hughes.blogspot.co.uk/
Categories
LUG

Linux Presentation Day at Blackpool LUG -April 2016

Nine people attended during the course of the day, and Linux was discussed in detail.

Multiple Linux distributions were on display: Slackware, Debian, SUSE, Fedora, Ubuntu, Lubuntu and, one of the highlights, Linux Mint running on a quad-core Raspberry Pi.

We had four computers to give away with Linux Mint installed (available free of charge), but there were no takers. We will have to try harder at the next LPD in October.

 

[Photo: Pizza fueled Linux presentation day.]

[Photo: Biscuits and project boxes.]

[Photo: Apple pie cookies and boxes of Raspberry Pi.]

Categories
LUG

Is Linux being helped or hijacked by corporate involvement?

Is Linux being helped or hijacked by corporate involvement? AKA has Linux lost its way?

Who knows, but here are some thoughts:

Linux started as a student project and gathered an enthusiastic band of volunteers... but look at it now.
http://www.theregister.co.uk/2015/02/18/who_writes_linux_2015/
“The Linux kernel is growing and changing faster than ever, but its development is increasingly being supported by a select group of companies, rather than by volunteer developers.

That’s according to the latest survey of Linux kernel development by the Linux Foundation, which it published to coincide with the kickoff of this year’s Linux Foundation Collaboration Summit on Wednesday.
Whether the decline in volunteer code contributions since Linux’s early days is actually a bad thing, however, is open to debate.

For one thing, kernel development is something of a rarified skill, and coders who successfully submit patches probably won’t stay unemployed for long. Now they’re volunteers; now they aren’t.

Also, the Linux kernel has hardly been taken over by some Good Ol’ Boys network of top IT companies. One developer who consistently makes the list of top kernel contributors, for example, is H Hartley Sweeten of Vision Engraving Systems, a maker of industrial engraving equipment.

Similarly, the Linux Foundation announced on Wednesday that its latest member is media giant Bloomberg, which has joined as a Gold member and says it will “continue to take on a more prominent role in the broader community development and collaboration behind Linux.”
From the comments on this page:
https://mjg59.dreamwidth.org/39546.html

Is this trend isolated or common?
Date: 2016-01-22 11:51 pm (UTC)
From: (Anonymous)
So far I count:
– Linux Foundation quietly dropped community representation.
– The Radeon related conspiracies (I didn’t look at it in depth yet).
– The libusb related conspiracy (See Peter Stuge’s talk at 32C3).
– The X.org foundation corporate membership limit change attempt.
Are there other examples of such patterns that I missed?
Are these isolated incidents? Or are they part of a bigger picture?

If they are part of a bigger picture, I can only think of corporate control over free software projects, but why?
I guess free software companies wouldn’t benefit from it.
However I think that the proprietary software companies would. They nowadays depend on free software so they can’t kill it, they probably don’t want to either.
However controlling the associations and leveraging such control could be used to help prevent free software from replacing their proprietary products.

Here I’m only wondering if something is happening, and I don’t have any answers.


Re: Is this trend isolated or common?
Date: 2016-01-23 12:18 am (UTC)
From: (Anonymous)
Free software has always been a threat to the “capitalist” business model espoused by the big corporations. This model has no room for products that threaten their high profit margins, so they always attempt to buy or hijack the problem people and products. An example from the dark side is Mark Russinovich being bought off by Microsoft after the Sony rootkit affair.

Another way to look at the Linux Foundation is that we have isolated the problem to a small place and made the corporates pour their money into a different rat hole, but we have to act on that approach, perhaps by forking the kernel and making the community version the important one, removing the Linux Foundation’s influence over the real world by simple community action.

While this approach would seem cruel in that Torvalds would be shorn of his halo, in fact devolving the “governance” of the Linux kernel would serve as a way of keeping him honest, and potentially improve the overall product. Just like all of the MySQL forks forced Oracle to be honest, so would a hurd of Linux forks force “Linux” back to the real world.

https://en.wikipedia.org/wiki/Linux_kernel :-

People like Linus Torvalds and I don’t plan the kernel evolution. We don’t sit there and think up the roadmap for the next two years, then assign resources to the various new features. That’s because we don’t have any resources. The resources are all owned by the various corporations who use and contribute to Linux, as well as by the various independent contributors out there. It’s those people who own the resources who decide…
— Andrew Morton, 2005

Linux is evolution, not intelligent design
— Linus Torvalds, 2005
http://www.zdnet.com/article/linux-foundation-leadership-controversy-erupts/
“The real question behind the debate, as I see it, is who controls The Linux Foundation? The users or the companies?

Garrett sees this move as The Linux Foundation taking one more step away from the community and towards the corporate world. Zemlin doesn’t address this point specifically but, tellingly, he does say that the “process for recruiting community directors should be changed to be in line with other leading organizations in our community and industry.”

In addition, as Garrett pointed out, individuals no longer have “the ability to run for and vote for a Linux Foundation board seat and influence the direction of the foundation.”

Personally, I see this as a move towards more corporate control of the Foundation. But, as the saying goes, he who pays the piper calls the tune. I find nothing surprising about this move.

While open-source users love the concept of community, the “community” has been made up of corporate executives and employees for well over a decade now. Only the most idealistic open-source developers and leaders and, ironically, open source’s most fervent enemies still think of Linux and open-source projects as being created and controlled by private individuals.

Besides, the overwhelming majority of The Linux Foundation board of directors has always been made up of corporately chosen directors. Still, this Linux Foundation decision rubs me the wrong way. Linux started as an individual’s project that quickly gathered the support of many bright programmers. There should always be a place for individuals rather than corporations to have their say in The Linux Foundation’s leadership.

I hope Sandler, who is a strong, brilliant open-source leader, not only is allowed to run for office, but wins a place on the board. I also hope the Foundation restores the right for individuals to vote and run for office on the board. This is not asking for much, and it would restore faith that the Foundation still has room left for the little people and not just the big companies.”
http://www.linuxuser.co.uk/features/systemd-for-better-or-worse
“They” tried, for years, to destroy Linux. “Only hackers use it”, “only hippies use it”, “only communists or terrorists use it”, “we own patents for most of it” and each one failed. Now they’re attacking it from within and it’s worked beautifully. One community torn asunder over systemd. Most distros now firmly in the palm of Red Hat and thus under their control. The modularity and control that distinguished Linux from other OS’s, now mostly gone and by the time Poettering has finished, it will all be gone. And then it will be too late.
Thankfully there are still some distros holding out – Slackware, Crux, Pisi, Manjaro OpenRC and Devuan if it gets off the ground. Long may they continue to resist. But I don’t hold out much hope in the long run. This is Corporate takeover 101 and so few even see what’s happening that the chances of stopping it are next to zero. Sad.
http://embedded-computing.com/articles/the-linux-revolution-just-keeps-advancing-heres-why/
A cornerstone of Linux’s success is its huge user community. Since 2005, some 11,800 individual developers from nearly 1,200 different companies have contributed to the kernel, the Linux Foundation says. Linux is the largest collaborative development project in history and it is being developed faster than any other software in the world.

And now Linux is accelerating tech innovation via open collaboration at all levels – from the chip and on up through the entire hardware and software stacks.
http://www.infoworld.com/article/2905331/open-source-software/the-new-struggles-facing-open-source.html
Ultimately, open source isn’t about code. It’s about community, and as Bert Hubert suggests, “community is the best predictor of the future of a project.” That community isn’t fostered by jerk project leads or corporate overlords pretending to be friendly foundations. It’s the heart of today’s biggest challenges in open source — as it was in the last decade.

The Linux model inspired IBM, NVIDIA, Mellanox, Google, and Tyan to create the OpenPOWER initiative in December 2013. OpenPOWER does for hardware what Linux has done for software: makes it free and open source.

http://www.wired.com/2015/02/nodejs-foundation/
it has become increasingly common for companies to maintain control of important open source tools.

That can make for more efficient decision making. But as we’ve seen with Node, it can also lead to tensions between the parent company and outside developers who adopt and develop the technology. How the Node community deals with these tensions could set important precedents for how other important open source technologies, such as the cloud computing tool Docker, are managed.

Categories
LUG Makerspace meetings

Meeting 12th September 2015

Attending: Mike Hull, Mike Hewitt, Ricky and Arthur.

Basement update.

With the deadline for opening in October getting ever closer, we started work at 10 today and finished at 6.

Washing machine and kitchen sink taken out of the area which will become the private entrance

Floor insulation carried in from storage, ready for going down on the floor 

Insulation down and floor boarding going down on top.

Insulation going down in the private entrance.

One piece of floor boarding left to do.

And the floor boarding is finished!

And it is still a nice sunny day in Blackpool as I head home at 6pm.

Categories
LUG Makerspace meetings

Meeting Saturday 5th September 2015

Attending: Mike Hull, Ricky, Geoff and Ted.

Basement update

More of the floor is down.
In the background, the opening in the wall will become a separate/private entrance for members to get into the basement space.

Categories
LUG Makerspace meetings

Meeting 29th August 2015

Attending: Mike Hull, Mike Hewitt, Ricky, Tony, Ted and Ollie.

Our application to the Crafts Council to participate in make-shift-do has been accepted and we are hoping to have our basement space ready for the event in October.

Basement space update

Ted putting plasterboard up over the wall insulation.

Ollie starts ‘tanking’ the walls at the other end of the basement.
When Ted and Tony finished putting the plasterboard on the wall, all the furniture was moved up to the end of the room where they had been working.

With more floor space clear and ready for ‘tanking’ to be applied, we finished for the day.

Update 2

Mike and Ricky spent Bank Holiday Monday finishing off the ‘tanking’ on the walls and floor. 
Next weekend the floor insulation can be started.

Categories
LUG Makerspace meetings

Blackpool LUG&Makerspace meeting 22nd August 2015

Attending: Mike Hull, Mike Hewitt, Ricky, Tony, Les, Martin, Arthur and James.

Basement space update

The shower/washroom/toilet cubicle partition wall is in place and has sound insulation behind the plasterboard skin.
All the copper water pipes are in place and tested for leaks. 
Cabling for mains sockets, lighting and networking is being put into place.
From next week, there is going to be a major push forward in an attempt to make the space usable for the “make:shift:do” event in October.

More information can be found here: www.make-shift-do.org.uk and applications can be made via the Crafts Council website or accessed directly here: https://www.craftscouncil.org.uk/account/application-make-shift-do – you must create a Crafts Council log-in to apply.
James demonstrated his new HP Pro Slate 8 Android tablet.
The HP Pro Slate 8 can do a really neat trick.
Use the stylus to write or draw on a standard paper notebook placed next to the screen, and whatever is written/drawn, appears on the screen.
The handwriting recognition is really good as well.
Arthur has built a one-armed robot:-
“I have finished the hardware side for a simple one servo Kalman filter testing robot.  It’s an inverted pendulum with the accelerometer/gyro at the top of the arm.  I have a second identical robot that just needs the electronics adding, the idea is I’ll have one and I’ll give the other to Martin and his son to play with.  Once the software is written for it I’ll really get into trying to learn the maths.”
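
For anyone wondering what such a robot is for, here is a minimal sketch (mine, not Arthur’s code) of the kind of filter it is built to test: a one-dimensional Kalman filter that fuses the smooth-but-drifting integrated gyro rate with the noisy-but-drift-free accelerometer angle to estimate the arm’s tilt.

```python
class SimpleKalman:
    """1-D Kalman filter for tilt: predict with the gyro, correct
    with the accelerometer. The noise values are illustrative."""

    def __init__(self, q=0.01, r=0.1):
        self.angle = 0.0  # estimated tilt (radians)
        self.p = 1.0      # variance of the estimate
        self.q = q        # process noise (gyro integration error)
        self.r = r        # measurement noise (accelerometer jitter)

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the gyro rate forward in time.
        self.angle += gyro_rate * dt
        self.p += self.q
        # Correct: blend in the accelerometer's angle reading.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= 1.0 - k
        return self.angle

# Called once per sensor sample, e.g. at 100 Hz:
# kf = SimpleKalman()
# angle = kf.update(gyro_rate=0.02, accel_angle=0.10, dt=0.01)
```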

Categories
Hardware LUG

Windows 10 IoT Core, who/what is it for?

Windows 10 IoT Core (Internet of Things), who/what is it for?

I bought the latest Raspberry Pi to try Windows 10 IoT Core, and wrote about it here:-
http://blackpoollug.blogspot.co.uk/2015/07/meeting-11th-july-2015-pi-windows-10.html
I was disappointed to say the least.

The following post on Hackaday concludes that Windows 10 IoT Core is not for makers/hackers:
http://hackaday.com/2015/08/13/raspberry-pi-and-windows-10-iot-core-a-huge-letdown/

“While Windows 10 IoT Core is great for any company that has a lot of Visual Basic and other engineering debt, it’s not meant for hackers, makers, or anyone building something new.”

“Windows 10 IoT Core is a beginning, and should be viewed as such. It’s there for those who want it, but for everyone else any one of a dozen Linux distributions will be better.”