Right to repair

Dear Blackpool Makerspace,

I am getting in touch with you on behalf of The Restart Project, a London-based charity and social enterprise that aims to fix our relationship with electronics.

Last year we held Fixfest UK, a UK-wide event in Manchester, with participants from 25 community repair groups across the country.

Together, we drafted [The Manchester Declaration](https://manchesterdeclaration.org/), calling on policymakers and companies to deliver more repairable products. Over 30 community repair groups from around the UK have added their voices as signatories, alongside organisations like Greenpeace, Keep Britain Tidy and more.

Any UK repair group, whether fixing electricals or other products, is invited to sign and show their support, as is any ally (makerspace, school, company). Based on your work, we think your organisation may be interested in signing and joining the movement too. Together, we can all push for our Right to Repair in the UK.

We look forward to hearing from you,

Isabel Lopez, Communications Assistant, The Restart Project

A RISC-V CPU For Eight Dollars

From Hackaday’s “New Part Day” column:

The big deal here is the Sipeed MAix-I module with WiFi, sold out because it costs nine bucks. Inside this module is a Kendryte K210 RISC-V CPU with 8MB of on-chip SRAM and a 400MHz clock. This chip is also loaded up with a Neural Network Processor, an Audio Processor with support for eight microphones, and a ‘Field Programmable IO array’, which sounds like it’s a crossbar on the 48 GPIOs on the chip. Details and documentation are obviously lacking.
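To picture what that crossbar means in practice, here is a toy model in Python (entirely my own illustration, not the K210’s real API): any physical pin can be routed to any internal peripheral function at run time, instead of each function being hard-wired to fixed pins.

```python
# Toy model of a pin-function crossbar ("FPIOA"), purely illustrative --
# the real K210 does this in silicon, not in Python.
class Crossbar:
    def __init__(self, n_pins=48):
        self.n_pins = n_pins
        self.routing = {}  # physical pin number -> peripheral function name

    def register(self, pin, function):
        """Route a physical pin to an internal peripheral signal."""
        if not 0 <= pin < self.n_pins:
            raise ValueError(f"pin {pin} out of range")
        if pin in self.routing:
            raise ValueError(f"pin {pin} already routed to {self.routing[pin]}")
        self.routing[pin] = function

fpioa = Crossbar()
fpioa.register(12, "GPIO0")      # the same pin could instead have carried
fpioa.register(13, "UART1_TX")   # UART1_TX, SPI0_SCLK, I2S0_WS, ...
print(fpioa.routing)
```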

RISC vs CISC – Why Instruction Sets No Longer Matter

For a long time I believed RISC (reduced instruction set computing) was superior to CISC (complex instruction set computing), being faster and more efficient; this article suggests otherwise:-
https://ethw.org/Why_Instruction_Sets_No_Longer_Matter

“Having looked briefly at some samples of both the RISC and CISC instruction sets, let us return now to the question at hand: which of these competing firmware architectures is actually better?

The answer, it turns out, is neither.

For a few years, yes, it seemed like RISC architectures such as SPARC really were delivering on their promise and outperforming their CISC machine contemporaries. But Robert Garner, one of the original designers of the SPARC, argues, compellingly, that this is more of a case of faulty correlation. The performance that RISC architectures were achieving was attributed to the nature of the instruction set, but in reality it was something else entirely – the increasing affordability of on-chip memory caches, which were first implemented on RISC machines.[8] When queried on the same issue, Peter Capek, who developed the Cell processor jointly for IBM and Sony, concurs: it was the major paradigm shift represented by on-chip memory caches, not the instruction set, that mattered most to RISC architectures.[9]

RISC architectures, like the Sun SPARC, were simply the first to take advantage of the dropping cost of cache memory by placing it in close proximity to the CPU, and thus “solving”, or at least creatively assuaging, one of the fundamental remaining problems of computer engineering – how to fix the huge discrepancy between processor speed and memory access times. Put simply, regardless of instruction set, because of the penalties involved, changes in memory hierarchy dominate issues of microcode.

In fact, both Garner and Capek argue that RISC instruction sets have in the last decade become very complex, while CISC instruction sets such as the x86 are now more or less broken down into RISC-type instructions at the CPU level whenever and wherever possible. Additionally, once CISC architectures such as x86 began to incorporate caches directly onto the chip as well, many of the performance advantages of RISC architectures simply disappeared. Like a rising tide, increases in cache sizes and memory access speeds float all boats, as it were.

Robert Garner cites both this trend and the eventual development of effective register renaming solutions for the x86 (which work around the smaller 8-register limit on that architecture and thus allow for greater parallelism and better out-of-order execution) as the “end of the relevance of the RISC vs. CISC controversy” [8].”
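The cache argument is easy to feel for yourself, even from a high-level language. The sketch below (my own rough illustration, not from the article) sums the same array twice with identical instructions; only the access order changes, yet the cache-hostile order is noticeably slower on typical machines.

```python
# Same work, same instructions, different memory-access order.
# The shuffled pass defeats the CPU's caches and prefetcher, so it runs
# measurably slower even though the arithmetic is identical.
import random
import time

N = 1 << 22                    # ~4 million elements, far bigger than cache
data = list(range(N))

in_order = list(range(N))      # cache-friendly: walk memory sequentially
shuffled = in_order[:]         # cache-hostile: same indices, random order
random.shuffle(shuffled)

def total(indices):
    s = 0
    for i in indices:
        s += data[i]           # the instruction stream is the same either way
    return s

for name, idx in (("sequential", in_order), ("shuffled", shuffled)):
    t0 = time.perf_counter()
    total(idx)
    print(f"{name:10s} {time.perf_counter() - t0:.2f} s")
```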

Vulca Makers Mobility Program visits Blackpool Makerspace

“Vulca’s main objective is to understand the impact of makers & hackers on mobility throughout Europe.

To demonstrate this impact, we plan to launch a research project about makers & hackers and their mobility via several networks, including Fab Labs, Biolabs, Makerspaces & Hackerspaces.”

“With hundreds of spaces and thousands of citizens involved in the Makers/Hackers movement, VULCA will soon be working on a PILOT PROJECT in collaboration with the European Commission. Pioneers of the Makers/Hackers Mobility in Europe, we invite you to join the 3rd VULCA Seminar in POZNAN. We have organised this seminar in close collaboration with the Hack’Mak’Fab’ Poland network and Zaklad Makerspace.”

2 – 4 May Vulca seminar :- https://vulca.eu/seminar_poznan/

First contact:-

Hi Makerspace FY1,

We have seen your page at https://blackpoolmakerspace.wordpress.com/, so we have a question: is Makerspace FY1 still alive, and is the Makers/Hackers community still active in this part of Blackpool?

Just a quick introduction about us:
Vulca Makers Mobility Program is an association created in 2015.

We are starting 2019 with a ninth VULCA TOUR, after 3 years of travelling all over Europe.
Our team for this ninth VULCA TOUR is made up of 2 volunteer makers from France: me (Thomas Sanz) and Alexandre Rousselet.

Vulca is an initiative to facilitate the exchange of makers within the network of Fab Labs / Makerspaces / Biolabs / Hackerspaces in Europe.
We are actively working to study and prove the Makers Mobility Impact (based on skills and knowledge transfer, soft skills, and so on…).

Behind this European association are nearly 100 volunteers. They have visited nearly 250 spaces over the last few years, and 2 members of our team are currently travelling through the UK.

This is our Agenda for this expedition:

1)  IRELAND – 14th to 24th January
(Dublin, Cork, Limerick, Sligo, Ballyshannon, Manorhamilton, Derry, Belfast)

2) SCOTLAND – 25th to 30th January 

(Glasgow, Dundee, Aberdeen, Findhorn, Edinburgh, Glasgow).

3) ENGLAND + WALES – 31st January to 28 February

Cockermouth + Dumfries – 31st

Sunderland + Newcastle upon Tyne – 1st

Airedale + Leeds + York + Hebden Bridge – 2/3rd

Lancaster + Blackpool + Preston – 4/5th

Work in progress

Can we come to Blackpool and meet you on the 4th?
(Or whenever suits you best.)


If yes, when exactly would it be possible to meet you? Attached are some pictures of us in Ireland over these last few days, to make this email more personal, and maybe you’ll see some familiar faces…

Have a good day.

Best regards.

Alex & Thom”

Trusting Linux

It is November 2018, and while searching for articles on Linux security I came across reproducible builds here:- https://www.privateinternetaccess.com/blog/2018/07/reproducible-builds-solving-an-old-open-source-problem-to-improve-security/ “Inconsistent compiling environments lead to inconsistent behavior: different compilers will produce different machine code, even from identical source code. These inconsistencies are one of the many sources of unreliable software, because these different compiled applications all have different behavior and different introduced problems. An app that behaves perfectly when compiled one way may not work at all with another compiler. Even worse, because of all of the different combinations of source code, hardware, compiler versions, settings, and other environmental factors, it becomes extremely hard to identify if your software is fundamentally secure.

Is this source code compromised? Another problem introduced by inconsistent software is that it is impossible to tell if the software coming out of your compiler is exactly what the author intended. A malicious compiler could create machine code that has intentional vulnerabilities and enable surveillance of systems, or outright takeover of a secure system.

The malicious compiler problem is one that has been discussed since the 1980s. Ken Thompson famously made a proof-of-concept compiler hack that was not only itself compromised, but would compromise every update to itself and be virtually impossible to detect. http://wiki.c2.com/?TheKenThompsonHack The most interesting piece of this proof-of-concept discusses how this wouldn’t even have to be a specific type of compiler. Other pieces of machine code, like bootloaders and the most fundamental software and firmware on a computer, could be compromised to introduce backdoors system-wide and be extremely hard to detect.

This is why there are entire movements (Coreboot and Libreboot https://www.coreboot.org/) behind moving all of these fundamental pieces to open-source solutions. It removes the most likely place for something like this to hide.”
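This kind of non-reproducibility is easy to demonstrate. The sketch below (my own, assuming gcc is installed) builds the same trivial C program twice; because the source embeds the compiler’s __TIME__ macro, the two binaries hash differently even though nothing meaningful changed.

```python
# Build identical source twice and compare binary hashes.
# The embedded __TIME__ macro makes the build non-reproducible.
import hashlib
import os
import subprocess
import tempfile
import time

SRC = '#include <stdio.h>\nint main(void){printf("built %s\\n", __TIME__);return 0;}\n'

def build_and_hash(out_path):
    # Compile C read from stdin ("-x c -") into out_path.
    subprocess.run(["gcc", "-x", "c", "-", "-o", out_path],
                   input=SRC, text=True, check=True)
    with open(out_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

with tempfile.TemporaryDirectory() as d:
    h1 = build_and_hash(os.path.join(d, "a.out"))
    time.sleep(1)  # let __TIME__ tick over
    h2 = build_and_hash(os.path.join(d, "b.out"))
    print(h1, h2, sep="\n")
    print("reproducible" if h1 == h2 else "NOT reproducible")
```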

When I moved on and searched for reproducible builds, most of the hits were from 2017, like this:- https://www.reddit.com/r/linux/comments/6p14q0/debians_archive_is_up_to_94_for_reproducible_build/ and this:- https://news.ycombinator.com/item?id=13690703 leading me to think the initiative had stalled. But then I found the reproducible builds weekly blog:- https://reproducible-builds.org/blog/posts/187/ which is now on week 187 and going strong.

Some of the arguments against reproducible builds go along the lines of “what’s the point if the hardware is compromised?” The next link appears to address some of the hardware issues:- https://puri.sm/learn/intel-me/

On one of the weekly (reproducible-builds.org) posts I found a reference to a blog post: https://puri.sm/posts/protecting-the-digital-supply-chain/ After reading this post, I discovered Purism, a company making motherboards with the Intel ME disabled, using programmable fuses: https://puri.sm/learn/intel-me/

https://www.reddit.com/r/linux/comments/540nmm/making_reproducible_builds/ :- “Nix/Guix have fewer packages to deal with, and although their infrastructure is already suited to r-b, most of the work for r-b is getting upstream projects to accept patches. Fixing the build environment is not enough, although it gets Nix/Guix quite a lot of the way there – (1) we at Debian think this is “cheating”, I can go into this in some more detail but I thought I’d keep this post short, and (2) it doesn’t work in all cases – e.g. for a build process that takes 2000 +/- 100 seconds where sphinx/doxygen is run at the end of it, embedding timestamps you can’t control. There is nothing Nix/Guix/anyone else can do about this, except to patch upstream. And most of the work we’re doing at Debian is patching upstream, which will eventually benefit everyone including Nix/Guix.

Also, the biggest goal for r-b is to actually have people upload attestations that “I built source X to get binary hash Y”. It is only a very marginal security improvement to “allow people to theoretically reproduce the binary”, because this only gives security to people who actually do rebuilds – but if we assume everyone does this, then we might as well all switch to Gentoo or some source-based distro. No, we need to distribute attestations so that even people who can’t rebuild everything can benefit. I don’t see Nix/Guix working on this; we are building background infrastructure (plus theoretical research) to eventually be able to do this.”
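The attestation idea in that quote (“I built source X to get binary hash Y”) is simple enough to sketch. This is my own toy illustration of such a record, not Debian’s actual format, and a real system would sign it:

```python
# Toy rebuild attestation: a rebuilder asserts that a given source hash
# produced a given binary hash. Illustrative only; real attestations are
# signed and follow a project-defined schema.
import hashlib
import json
import time

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make_attestation(source_tarball, built_binary, builder_id):
    return json.dumps({
        "builder": builder_id,
        "source_sha256": sha256_file(source_tarball),
        "binary_sha256": sha256_file(built_binary),
        "built_at": int(time.time()),
    }, indent=2)

# Usage (hypothetical filenames):
# print(make_attestation("foo-1.0.tar.gz", "foo_1.0_amd64.deb", "rebuilder-01"))
```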

https://www.reddit.com/r/linux/comments/6p14q0/debians_archive_is_up_to_94_for_reproducible_build/ Debian’s Archive Is Up To 94% For Reproducible Builds

So, what are the main types of non-reproducible packages in that 6%? I don’t see specifics from a skim of https://reproducible-builds.org or https://wiki.debian.org/ReproducibleBuilds. Is it specific groups of packages that aren’t reliably reproducible, or is it a global thing where packages just chronically have reproducibility problems every so often?

This seems to be the current list of unreproducible packages in the Debian unstable: https://tests.reproducible-builds.org/debian/unstable/index_dd-list.html

Here’s a detailed article on an effort to make a large codebase reproducible:- http://blog.netbsd.org/tnf/entry/netbsd_fully_reproducible_builds

https://news.ycombinator.com/item?id=14834386 :- Status update from the Reproducible Builds project (debian.org), posted by lamby on July 23, 2017 :- “Guix and Nix are input-reproducible. Given the same input description (input being the source files and any dependencies) an output comes out. Builds are then looked up in a build cache based on the hash of all the combined inputs. However, the _output_ of Nix artifacts is not reproducible. Running the same input twice will yield a different result. https://www.reddit.com/r/NixOS/comments/2n926h/get_best_of_both_worlds_guix_vs_nixos/ Nix does some tricks to improve output reproducibility, like building things in sandboxes with fixed time, and using tarballs without modification dates, but bit-by-bit reproducible output is not their goal. They also don’t have the manpower for this.

Currently, a build is built by a trusted build server for which you have the public key. And you look up the build by input hash, but have no way to check if the thing the build server is serving is legit. It’s fully based on trust. However, with Debian putting so much effort into reproducible output, Nix can benefit too. In the future, we would like to get rid of the ‘trust-based’ build servers and instead move to a consensus model. Say if 3 servers give the same output hash given an input hash, then we trust that download and avoid a compile from source. If you still don’t trust it, you can build from source yourself and check if the source is trustworthy.”
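The consensus model described at the end of that quote is also easy to sketch (again, my own illustration): only trust a cached binary when at least k independent build servers report the same output hash for the same input.

```python
# k-of-n consensus over build-server reports: trust the download only if
# at least k servers agree on the output hash; otherwise build from source.
from collections import Counter

def consensus_hash(reports, k=3):
    """reports: dict of server name -> output hash claimed for one input hash."""
    if not reports:
        return None
    best, votes = Counter(reports.values()).most_common(1)[0]
    return best if votes >= k else None  # None means: fall back to a local build

reports = {
    "server-a": "9f2c1e...", "server-b": "9f2c1e...",
    "server-c": "9f2c1e...", "server-d": "1d4e07...",  # one dissenting server
}
print(consensus_hash(reports))  # '9f2c1e...': three servers agree
```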

Arguments for and against reproducible builds:- https://news.ycombinator.com/item?id=13690703 NetBSD fully reproducible builds (netbsd.org)

“aseipp on Feb 21, 2017:

NetBSD can build the whole OS from source tree to distribution media with a single command.

This one, at least, can be done in NixOS/Guix once you check out the source – and the Nix package manager can technically be installed on any Linux distro, too (there are also ports to Cygwin/FreeBSD/Mac etc.), and you can run a single command to get the ISO, or any kind of build product you want. The carefully tested and maintained portability/cross-compilation is another thing, though: NetBSD has fantastic support here that is not easily replicated without doing a ton of work. So its universal, basically-always-works cross-compilation, everywhere, is rather unique. You can’t build NixOS ISOs natively on e.g. Nix-on-Darwin, which is rather unfortunate.”

https://www.schneier.com/blog/archives/2006/01/countering_trus.html In 2006, Bruce Schneier blogged a pretty good breakdown of a paper by David A. Wheeler https://dwheeler.com/trusting-trust/ on defending against Thompson’s specific example. The paper itself is still paywalled as of this time, to the best of my knowledge. The Wheeler paper is very interesting, but it is focused on the bowels of compiler design, whereas the question here is more about end-user precautions than compiler design or even systems programming. There are generally two ways we understand the risk involved in compiling a specific piece of code:

1. Authenticating the code as a true, untampered-with piece of code written by someone whom we have chosen to trust.
2. Closely examining the content of the code itself, and thoroughly understanding what it does.

The second case – a thorough code audit – is a huge, long, resource-intensive task. It almost never really happens for codebases of nontrivial size, because it is simply too costly. Much more often we are looking at the first case: trusting the coder, and validating that the code hasn’t been tampered with between the coder and the consumer. Of course, in the end, as others have pointed out, these integrity checks only do any good if the developers’ infrastructure hasn’t been compromised, if the developer was coding well, and so on.
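The first case, validating that the code hasn’t been tampered with in transit, usually comes down to checking a published checksum (and, ideally, a signature over the checksum file). A minimal sketch of the checksum half, with assumed filenames:

```python
# Verify a downloaded file against a sha256sum-style sums file
# ("<hex digest>  <filename>" per line). This catches tampering in transit;
# it cannot prove the author's own machine was clean.
import hashlib

def sha256_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(download, sums_file="SHA256SUMS"):
    with open(sums_file) as f:
        for line in f:
            digest, name = line.split()
            if name == download:
                return sha256_file(download) == digest
    raise KeyError(f"{download} not listed in {sums_file}")

# Usage (hypothetical): verify("foo-1.0.tar.gz")
```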

http://entrop-x.com/index.php/en/cyber-security/40-verifying-the-integrity-of-a-compiler

https://itsfoss.com/install-software-from-source-code/

https://softwareengineering.stackexchange.com/questions/184874/is-ken-thompsons-compiler-hack-still-a-threat

https://askubuntu.com/questions/28372/how-do-i-get-and-modify-the-source-code-of-packages-installed-through-apt-get

http://www.linux-mag.com/id/976/ compiling and linking

https://www.linuxquestions.org/linux/answers/Programming/Building_C_programs_on_Linux

3D printer is finished and operational.

The 3D printer, which the Makerspace bought as a kit, had suffered intermittent problems with its drive belts.

The problem has now been resolved. The repair involved replacing the drive wheels with wheels which have two locking screws instead of one, then realigning all the belts and drive wheels to allow free movement of the print head.

Hello from Poland

Blackpool Makerspace received an invitation from Poland to attend a financial services conference to be held in London on Thursday 4th October.

The invite generated much discussion about the medium it arrived on: a 3.5 inch floppy disk! One of our younger members, Josh, had never seen a real floppy disk before.

Some suggested the floppy was a stunt to make the invite stand out and become a talking point, as only geeks would have the ability or hardware to read a floppy.

On a separate note,  Dan Lynch from Liverpool Makerspace came for a visit today before going with Les to the Blackpool Raspberry Jam birthday party. Four years and still going strong.

Energy Self-Sufficiency in an Urban Environment

2018/8/18 meeting

Some of the members of Blackpool Makerspace have been considering undertaking one or more energy self-sufficiency projects, based around a Blackpool town house.

Ideas discussed include water recycling, solar hot-water generation, solar electricity generation, wind generation of electricity, and other types of generator which could run on a range of different fuels.

Also considered as possible projects: how to store energy for later use, using batteries, water tanks, piles of bricks and other yet-to-be-thought-of ideas; how to minimise energy use; and how to conserve energy.
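As a quick feel for the storage numbers (my own back-of-envelope arithmetic, not a measured figure): the energy a hot-water tank can hold is just mass × specific heat × temperature rise.

```python
# Energy stored in a tank of hot water: E = m * c * dT.
# 1 litre of water is ~1 kg; c for water is ~4186 J/(kg*degC).
def stored_kwh(litres, delta_t_celsius):
    joules = litres * 4186 * delta_t_celsius
    return joules / 3_600_000  # 1 kWh = 3.6 MJ

# A 200-litre cylinder heated from 15 degC to 65 degC:
print(f"{stored_kwh(200, 50):.1f} kWh")  # ~11.6 kWh
```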

Food production is also being considered, with compact growing systems for confined spaces, including hydroponics and aquaponics.

While looking for projects, it became apparent that there are numerous old patents and designs which have been sidelined or forgotten in the name of progress. There was also a suspicion that big businesses buy up patents which may pose a threat to their business model.

Practicality, cost, and return on investment will be weighed for each project under consideration. For example, it was pointed out that the cost of the scaffolding alone would be over £1000 before any type of solar panel could be put on the Makerspace roof. With wind turbines, there is legislation regarding height and size to be considered. There is a lot of conflicting information relating to solar and wind generation, with the marketing people tending to push the most expensive option as opposed to the most efficient, or best value for money, option.
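As a very rough worked example of that return-on-investment test (the scaffolding figure is the one quoted above; every other number is purely illustrative):

```python
# Simple payback period: total installed cost / annual saving.
def payback_years(installed_cost_gbp, annual_saving_gbp):
    return installed_cost_gbp / annual_saving_gbp

scaffolding = 1000      # quoted above
panels_inverter = 3500  # illustrative guess, not a quote we have obtained
annual_saving = 400     # illustrative guess

print(f"Payback: {payback_years(scaffolding + panels_inverter, annual_saving):.1f} years")
# -> Payback: 11.3 years (before maintenance, degradation, or price changes)
```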

Energy companies in the UK are pushing (marketing) smart meters as an energy-awareness and possibly energy-saving option. But it has been reported in the press that the meter might cease to be smart and need replacing when you switch to a new supplier. It is also reported that the meter may not save the user any money at all. Given that switching supplier is becoming more common, needing a new meter every time you switch looks like a huge waste of money and resources. http://www.thisismoney.co.uk/money/bills/article-4912922/Smart-device-replaced-swap.html

There are numerous free and open-source methods of energy monitoring which may be more appropriate and waste fewer resources, such as the OpenEnergyMonitor project: https://openenergymonitor.org/
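At its core, DIY energy monitoring of this kind boils down to sampling power and integrating it over time. A minimal sketch (my own, with an assumed sensor setup, not OpenEnergyMonitor’s actual code):

```python
# Turn a series of power readings into energy used, in kWh.
def kwh_from_samples(power_watts, interval_s):
    """power_watts: power readings taken every interval_s seconds."""
    joules = sum(power_watts) * interval_s  # rectangle-rule integration
    return joules / 3_600_000               # 1 kWh = 3.6 MJ

# One hour of 10-second samples: ~150 W base load, plus a 2 kW kettle
# running for 5 minutes.
samples = [150] * 330 + [2150] * 30
print(f"{kwh_from_samples(samples, 10):.3f} kWh")  # ~0.317 kWh
```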

More details of our progress will appear as we continue to investigate the options.