On 07/13/19 17:50, Molly Miller wrote:
I have a couple of things I'd like to say in response to both the
original post and some of the points raised in the replies by Kiyoshi
and Luis. I've been using Scaleway's virtualised services for nearly two
years now (first with a pair of small x86_64 machines for personal use,
and more recently an aarch64 machine for Adélie development work), so I
have some operational experience which I'd like to share, and I also
broadly agree with the points regarding shared tenancy of physical
hardware.
Please excuse the thread-breaking, but I wanted to make sure this gets
copied to adelie-infra properly. I'm also a bit over-caffeinated as I'm
writing this, so apologies if it gets a bit rambly or hard to follow.
Thank you for the in-depth response, and for sharing your operational
experience.
First, the OP:
On 2019-07-13 10:56, A. Wilcox wrote:
> Scaleway offers a similar level of reliability, and has a higher level
> of availability based on our current account with them. They
> additionally offer servers that are not based on the x86 architecture,
> so we are still protected from the numerous issues that plague x86.
Scaleway's business focus has shifted over the past year or so; they are
no longer pushing ARM cloud services anywhere near as much as they used
to, and now seem to be more interested in providing managed services in
a similar vein to AWS. As such, there's limited capacity on their ARM
cloud (which I believe they are no longer expanding), which could
potentially cause issues in securing the resources necessary for a
migration.
The only limited capacity I saw was on the really big systems (128/256
GB RAM), but I'll admit I don't know what capacity might look like in
the future if we needed to expand.
> The network has never suffered any outages, either. Since the
> cloud features ARM servers, we would additionally still be able to avoid
> the x86 architecture and all of its failings.
I can't recall any particular dates, but from offhand experience
Scaleway's network (at least the virtual machine estate in Amsterdam)
has *definitely* suffered availability issues. (I can't speak for
Scaleway's Paris network, which I'm assuming is more substantial.) I
must stress, however, that these issues are nowhere near as bad as
Integricloud's -- just don't expect perfection, because you'll be
disappointed. For what it's worth, in my experience any of the brief
network issues have disrupted IPv6 connectivity more than they have
disrupted IPv4 connectivity.
It's a lot better, at this point, to have hiccups on v6 than total
outages. Integricloud's total outages have killed our productivity.
> We have continually been limited by our lack of IPv4 space at
> Integricloud. Currently, we "proxy" every server via athdheise, a
> virtual server on our Integricloud dedicated system that has both an
> IPv4 and IPv6 address.
(This is an aside, but I've worked in an environment which has
successfully operated services from an IPv6-only network, with a
dual-stacked reverse proxy at the network border to handle connections
from IPv4-only clients. The border gateway ran Haproxy, which is capable
of selecting backends based on server name indication in TLS handshakes;
as the SNI is sent before any key exchange is performed, the gateway
machine did not need access to any private key material, and could be
used for any protocol which runs over TLS and uses SNI.)
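For the curious, the gateway setup described above can be sketched in a
few lines of HAProxy configuration. This is only an illustration of the
technique, not the actual config from that environment; the hostnames
and backend addresses are invented:

```haproxy
frontend tls_in
    mode tcp
    bind :::443 v4v6
    # Wait for the TLS ClientHello so the SNI field is available.
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_sni -m found }
    # Route purely on SNI -- no certificates or private keys are
    # ever loaded on this gateway machine.
    use_backend bk_wiki if { req.ssl_sni -i wiki.example.org }
    use_backend bk_git  if { req.ssl_sni -i git.example.org }

backend bk_wiki
    mode tcp
    server wiki [2001:db8::10]:443

backend bk_git
    mode tcp
    server git [2001:db8::20]:443
```

Because the proxy runs in TCP mode and never terminates TLS, the same
frontend works for any TLS-wrapped protocol that sends SNI, not just
HTTPS.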
I hate this networking setup so much. Everything should just be native
(or native-ish). It's needless complexity. It irritates me.
> If we use Scaleway virtual servers, every system gets its own
> IPv4 address, which drastically simplifies our administration.
Scaleway's network configuration is weird for virtual machines -- I'll
get to that in my operational experience spiel in a bit.
> Additionally, we would receive a lot more RAM per virtual server.
More RAM is always better -- the RAM which our Integricloud machines
currently have is eye-wateringly small.
> Finally, we would save a dramatic amount of money. We currently pay
> 225$/mo pre-tax for Integricloud.
Saving money is also good.
> The current systems we run on Integricloud are:
I agree strongly with Kiyoshi here -- though I'm not so keen on having
personal resources under the adelielinux.org banner, I won't object if
they're made available for use by other contributors.
Refer to my response there.
I have some points to add to what Luis said:
On 2019-07-13 16:58, Luis Ressel wrote:
> I strongly agree with Aerdan here. In my opinion, the risks of moving to
> VPSes on hardware shared with other tenants outweight all (perceived or
> real) benefits of using aarch64 instead of x86.
The place I mentioned above with the IPv4-to-IPv6 gateway also provided
virtualised hosting services, and I interacted with their systems for
provisioning and managing customer VM's on a number of occasions. It's
*really easy* for the party which controls the host to reboot a guest
into a rescue environment which the host controls, and then mount and
read from the root filesystem. The next time I need to provision a
server on shared hardware, it's going to have an encrypted root
filesystem -- dedicated hardware would be even better, but at the very
least an encrypted root filesystem raises the bar from "mounting the
guest disk image" to "dumping a snapshot of the guest memory and then
extracting the encryption keys".
I wouldn't use a VPS without an encrypted root FS. It adds some Special
Circumstances when the system has to be rebooted, but it's very much
worth it.
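As a rough sketch of what provisioning that looks like (device names
are hypothetical, and the exact initramfs steps vary by distribution --
treat this as an outline, not a runnable recipe):

```shell
# Sketch only: /dev/vda2 stands in for the root partition.
# Format the partition as a LUKS2 container (prompts for a passphrase).
cryptsetup luksFormat --type luks2 /dev/vda2

# Open the container and build the root filesystem inside it.
cryptsetup open /dev/vda2 cryptroot
mkfs.ext4 /dev/mapper/cryptroot

# The initramfs must know to unlock it at boot; on most distros that
# means a crypttab entry plus regenerating the initramfs afterwards.
echo 'cryptroot /dev/vda2 none luks' >> /mnt/etc/crypttab
```

The "Special Circumstances" at reboot come from that last step: someone
has to supply the passphrase at boot, typically over an out-of-band
console or a dropbear-in-initramfs setup.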
> However, I am in favour of migrating away from Integricloud, regardless
> of the destination to which we'd migrate, be it aarch64 vpses, our
> already existing x86 infra, colo'ed x86 or ppc servers, or a cluster of
> raspis in someone's basement.
(I don't know how serious Luis is with the remark about a Raspberry Pi
cluster, but provided someone has a basement with redundant power and
network, I'd say that this is actually workable. The Raspberry Pi
Foundation's website has been served off a cluster of Raspberry Pi's for
special events several times in the past few years, most recently the Pi
4 launch, so this isn't a new idea.)
Please don't make me run Adélie in my office. Imagine the downtime with
the tornadoes next spring! :(
Okay, some bits and pieces of operational experience from using Scaleway
for a while:
The network configuration on their virtual machines is *weird*. Every
virtual server gets an address in 10.0.0.0/8 space, which is
point-to-point linked to a device on the host (the netmask on the
address issued by the on-link DHCP server is /31). Scaleway's border
routers then perform bidirectional NAT to and from the public IPv4
address allocated to the server. While this limits each VM to a single
public IPv4 address, it doesn't really cause any operational problems
that I've seen, and it means that IPv4 addresses can be dynamically
moved between VM's. This also means that you can reach other VM's within
Scaleway's network from a server without a public IPv4 address assigned.
I kind of like the idea of "elastic IPs", because it means we could
bring up a temporary VPS with the same public IP if something went down
or crashed or needed maintenance.
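For anyone unfamiliar with /31 point-to-point links (RFC 3021): unlike
larger subnets, a /31 has no network or broadcast address, so both of
its addresses are usable endpoints -- which is exactly what a
host-to-guest link needs. A quick sketch with Python's ipaddress module
(the 10.1.2.0 addresses are made up for illustration):

```python
import ipaddress

# A /31 point-to-point link: exactly two addresses, and both are
# usable as host endpoints (RFC 3021).
link = ipaddress.ip_network("10.1.2.0/31")
endpoints = list(link)

print(link.num_addresses)  # 2
print(endpoints)  # [IPv4Address('10.1.2.0'), IPv4Address('10.1.2.1')]
```

One endpoint goes to the VM, the other to the device on the host, and
the public address is NATed onto the link by the border routers.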
The IPv6 setup is also strange. Similar to the IPv4 story, you get a
single public IPv6 address which is point-to-point linked to a device on
the host machine (the address is pulled in via cloud-init at boot time,
and has a netmask of /127). My memory is a bit fuzzy around the details
of this next bit, but if you turn the VM off in such a way that results
in it being archived (I think), then you get allocated a *different*
IPv6 address the next time you turn it back on, and then have to run
around to update DNS records for the new address.
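The /127 is the direct IPv6 analogue of the IPv4 /31: two addresses,
both endpoints. The kind of boot-time check one could run to catch the
address change described above might look like this; the helper name
and the 2001:db8:: addresses are invented for illustration:

```python
import ipaddress

def aaaa_needs_update(recorded: str, current: str) -> bool:
    """Hypothetical helper: True if the host's current IPv6 address
    no longer matches the AAAA record we last published."""
    return ipaddress.IPv6Address(recorded) != ipaddress.IPv6Address(current)

# The /127 point-to-point link mirrors the IPv4 /31: two addresses.
link = ipaddress.ip_network("2001:db8::10/127")
print(link.num_addresses)  # 2

# Same address written two ways -> no DNS update needed.
print(aaaa_needs_update("2001:db8::0010", "2001:db8::10"))  # False
# A genuinely new address after an archive/restart -> update required.
print(aaaa_needs_update("2001:db8::10", "2001:db8::11"))    # True
```

Comparing parsed addresses rather than raw strings avoids false alarms
from equivalent textual representations of the same IPv6 address.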
I've used both Scaleway's x86_64 and ARMv8 VM's, and *in my experience*
(I stress this heavily), the ARMv8 machine I've most recently had (16
cores and 16GB of RAM, for build work) has been quite a bit more special
than the x86_64 ones. (I've had others in the past, but I can't remember
if any were as annoying as this one.) I've had issues getting the VM to
boot and reboot properly from the control panel, with the control panel
losing sync with reality, or getting stuck in "starting up" or "shutting
down" states for over half an hour at a time. Rebooting from within the
VM itself fairly reliably triggers an (emulated?) hardware fault to do
with IRQ exceptions just as the kernel exits, and getting the VM out of
this state regularly triggers an IPv6 address change as described above.
I haven't ever seen any of these issues on the x86_64 VM's, which are
very well behaved in these regards, and it's entirely possible that it's
a problem with the particular hypervisor machine which my VM is running
on. Nonetheless, I'd strongly recommend carefully evaluating Scaleway's
ARMv8 services before committing to them.
As I noted, I think we can start by just migrating the wiki and seeing
how we like it. If we do end up having issues caused by Scaleway's
infrastructure, we don't have to commit.
This will probably be a deal-breaker for some, but (outside of
exceptional circumstances) you do not control the kernel on the ARMv8
VM's. The emulated firmware is configured to boot over the network,
downloading a kernel and initramfs provided by Scaleway, which performs
some early-boot tasks like mounting the root filesystem and possibly
downloading a kernel module tree onto the root filesystem (I can't
remember where in the boot process this occurs), and then
switch_root()'s into the root filesystem and starts init. This used to
be the case for x86_64 VM's, but they introduced an option to boot from
an EFI system partition on the VM's root disk some time last year --
this feature has not been ported to their ARMv8 cloud as of yet. I
briefly tried to hack something up with kexec() to try and chainload a
guest-controlled kernel, but this was unsuccessful. You also *might* be
able to interrupt the emulated firmware before it boots from the
network, and manually direct it to boot from disk or load a boot image
from elsewhere (which is a trick I've seen used to get other OS's like
OpenBSD running on Scaleway's x86_64 VM's before the local boot option
came along), but I haven't tested this at all, and it requires manual
intervention very early in the boot process.
Actually, over here I was able to choose "custom boot" for ARM VMs just
like you describe for the x86_64 ones. I don't know if that is the case
in the Amsterdam ones, because all the Scaleway VMs I have are in Paris.
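For reference, the kexec() chainloading attempt mentioned above would
look roughly like this with kexec-tools (the kernel and initramfs paths
are hypothetical, and this is the approach that reportedly did not work
on the ARMv8 VM's):

```shell
# Stage a guest-controlled kernel from the running (Scaleway-provided)
# system; paths are illustrative.
kexec -l /boot/vmlinuz-guest \
      --initrd=/boot/initramfs-guest \
      --command-line="root=/dev/vda1 rw"

# Jump into the staged kernel, bypassing the network-boot firmware.
kexec -e
```

If the "custom boot" option mentioned for the Paris ARM VM's works as
it does on x86_64, none of this should be necessary there.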
To recap, I think three solutions to our infrastructure needs have been
proposed so far:
- 1. Move to Scaleway's ARMv8 cloud.
- 2. Move to our dedicated x86_64 machines.
- 3. Move to a cluster of Raspberry Pi's (yes, I'm quite serious about
this).
The only thing I think I can add to this list is provisioning another
ARMv8 machine at Packet.net and configuring it as a VM host, in a
similar manner to corgibutt, which currently hosts code.foxkit.us.
However, as arw indicated in the OP, this is prohibitively expensive.
#2 is out per my other response. #3 is please, please out.
It is looking more and more like #1 is out, which may leave us paying
the majority of our income to Packet.net.
This is a rock-and-a-hard-place problem, and I really don't have
good solutions in mind. On one hand, I'm very sympathetic to the desire
to keep our infrastructure off x86_64 hardware, but at the same time I
think that running everything in VM's on shared hardware has some
security and privacy implications (and then there are the practical
considerations for using Scaleway's ARMv8 cloud).
I suppose we are ignoring the fact that we can just continue using
Integricloud.
It almost looks like that's where we're heading, too. :(
A. Wilcox (awilfox)
Project Lead, Adélie Linux