Conversation with #inferno at Thu Jul 23 13:18:03 2015 on powerman@chat.freenode.net (irc)
(13:18:03) #inferno: Topic for #inferno set by anth at 19:45:51 on 04/07/15
(19:53:34) MrVandelay [~nox@176.10.248.198] entered the room.
(19:54:21) MrVandelay: Hey. I've got a 6-node ARMv7 cluster with 32 cores (+6 OpenCL-capable GPUs)
(19:54:47) MrVandelay: I'm looking into running Inferno on it, as it seems to be the only SSI cluster software supporting ARM, that is actively maintained, that I could find
(19:54:54) MrVandelay: Could someone clarify some things for me?
(19:55:21) MrVandelay: sorry, 48 cores, not 32.
(20:04:27) anth_x: MrVandelay: i would not normally describe inferno as "SSI cluster software", but what are your questions?
(20:07:21) MrVandelay: anth_x: Exactly. That's what's confusing me. It seems to be something other than both OpenMPI and SSI
(20:07:43) anth_x: yes, it certainly is.
(20:07:53) MrVandelay: What I'm looking for with an SSI system would be that the clustering is transparent to the applications running on it
(20:08:19) MrVandelay: As, I've understood, it is in MOSIX
(20:08:45) MrVandelay: Inferno, if I understand it correctly, simply presents all resources as a distributed filesystem, with no distinction between local and remote resources
(20:09:09) MrVandelay: But you would still have to code programs to specifically take advantage of that, unless there is some sort of built-in scheduler?
(20:10:09) anth_x: your programs don't need to do anything to take advantage of the resource distribution (they're just accessing files, which might be local, remote, synthetic, whatever).
(20:10:36) MrVandelay: Well, if I have 48 cores spread out over 6 nodes
(20:10:39) anth_x: but it sounds to me like you want that set of distributed resources to include actual computation; inferno doesn't do anything for that automatically.
(20:11:13) MrVandelay: I would like to distribute processes between those 48 cores without having to specifically code for it
(20:11:46) anth_x: i don't think you're going to get that from inferno. process distribution is not automatic.
(20:12:04) MrVandelay: But, each core would show up as a local core for the operating system, right?
(20:12:27) anth_x: (it's worth noting that the other resource distribution makes a lot of it much easier, though, as you can have the same *other* resources on each system image)
(20:12:50) anth_x: i think not in the way you want.
(20:13:08) MrVandelay: Yeah I think so too, but could you please try to make me understand it at the very least? :-)
(20:13:11) anth_x: when you launch a process, there's no automatic way to have computation happen on another node.
(20:13:24) MrVandelay: Right. So there's no scheduler, so to speak?
(20:13:38) MrVandelay: But I could manually execute a process on another node's core?
(20:13:53) anth_x: well, there's a scheduler like in pretty much every OS, but not in the way cluster tools mean it.
(20:13:59) anth_x: correct.
(20:14:11) MrVandelay: Yeah I meant a cluster scheduler.
(20:14:13) MrVandelay: Hmm.
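[A minimal sketch of the "resources are just files" idea discussed above, in Inferno sh. mount and /dev/sysname are standard Inferno; the worker host name node2 is a hypothetical stand-in, and -A skips authentication:]

    ; mount -A tcp!node2!styx /n/node2    # import node2's namespace
    ; cat /n/node2/dev/sysname            # read one of its devices like any local file
    node2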
(20:14:16) anth_x: there've been several projects doing that on very large scale, using plan9 and inferno.
(20:15:00) MrVandelay: I'm starting to see some advantages over the MPICH method, though
(20:15:21) MrVandelay: For example. My nodes each have 8 cores running Heterogeneous Multi-Processing
(20:15:22) anth_x: oh, i certainly think so.
(20:15:44) MrVandelay: So the kernel scheduler (at least in linux) load-balances processes between the cores automatically within each node
(20:17:19) MrVandelay: This is still so very confusing to me. Could you please tell me more about it?
(20:17:36) MrVandelay: Would Inferno running natively support HMP on my nodes, for example?
(20:17:58) MrVandelay: Or would I have to run Inferno on top of Linux?
(20:17:59) anth_x: to the best of my knowledge, all the existing inferno implementations are SMP.
(20:18:16) anth_x: (again: within a node)
(20:18:41) anth_x: your best bet is to read some of the papers. let me see if i can find a pointer to a good one...
(20:19:23) MrVandelay: That would be nice. I've been trying to wrap my head around the Plan 9 implementation (which I've gathered is very similar), but it's so different from anything that I know or have used that it's a bit confusing
(20:20:14) anth_x: the ideas, as far as distributed computing go, are pretty much the same.
(20:20:45) MrVandelay: Well I can't say that I'm experienced in that area either
(20:21:07) MrVandelay: I have some experience in running the nodes and MPICH programming
(20:21:47) MrVandelay: I'm not going away from it; it's rather that I'd love to try these different approaches
(20:24:00) MrVandelay: So, hmm. Inferno can run on top of Linux, correct? And since the Linux kernel supports HMP and can see all 8 cores of each node
(20:24:20) MrVandelay: Does that mean that Inferno will see and be able to utilize each core if run on top of Linux?
(20:36:33) anth_x: MrVandelay: sorry, got distracted. i'm not sure how well the hosted environment would deal with HMP. never looked.
(20:36:47) anth_x: i'd suggest asking the mailing list, if you've got a specific environment in mind.
(20:36:53) MrVandelay: anth_x: That's fine. I'm just happy to get some clarification :-)
(20:37:12) MrVandelay: Well, with HMP active in Linux the cores just appear as individual cores
(20:37:28) MrVandelay: As if it was one 8-core processor instead of two 4-core processors
(20:37:32) anth_x: the HMP thing aside, yes: i strongly suspect the emulated environment is much more common.
(20:37:58) MrVandelay: I figure that since HMP is transparent to userland in Linux, it should be transparent to Inferno running on Linux as well?
(20:38:11) anth_x: if that's the case, then yes.
(20:38:28) MrVandelay: So it makes more sense for me to run Inferno under Linux than to run it natively, then
(20:40:32) anth_x: in most cases. all the big cluster projects i know of that have used inferno have used it hosted on top of plan9 or linux.
(20:40:46) MrVandelay: The nodes are running Samsung Exynos 5s. They include a low-performance/energy-efficient 1.7GHz Cortex-A7 quad-core processor and a high-performance/energy-inefficient 2GHz quad-core processor. By default, the architecture is intended to switch between the two processors, using only one quad-core at a time depending on the load.
So if the device was idling, it would use the more energy-efficient processor, and
(20:40:48) MrVandelay: when computing power was needed it would switch over to the fast processor, so as to maximize energy/performance
(20:41:24) MrVandelay: The Linux kernel can disable that hardware scheduler in favour of taking over itself, enabling both CPUs at once, if set as such.
(20:42:00) MrVandelay: Interesting. I thought that Inferno was itself a planned successor to Plan 9. Why would one host Inferno on top of Plan 9?
(20:42:28) anth_x: it's not really a successor, more a cousin.
(20:42:41) MrVandelay: But both are complete operating systems, aren't they?
(20:42:47) anth_x: yes.
(20:42:55) MrVandelay: What's the benefit of running Inferno on top of Plan 9?
(20:43:34) anth_x: you can run inferno everywhere. so, for example, if you've got a giant cluster using plan9 and a job dispatch system using inferno, your control node can run whatever the hell you want, like some mac or windows desktop.
(20:44:12) MrVandelay: Oh. But I thought you could run Plan 9 everywhere as well? I mean, you can run plan 9 inside Linux, right?
(20:44:41) anth_x: not directly. i mean, it runs in qemu or vmware or whatever, but it doesn't really have a "hosted" mode like inferno does.
(20:44:42) MrVandelay: Also, I'm willing to read anything you link, if you've got some good material on hand
(20:44:47) MrVandelay: Oh.
(20:44:53) anth_x: (there's 9vx, but that's "special")
(20:45:00) MrVandelay: Does plan 9 offer something Inferno doesn't?
(20:45:17) anth_x: the big difference is language/VM.
(20:45:59) anth_x: plan9 is "traditional", in that it runs compiled machine code (typically C code). in inferno, everything's written in limbo, running in a VM.
(20:46:19) anth_x: (you can sort of "cheat" in inferno using kernel modules or the like)
(20:48:53) MrVandelay: Hmm. So Inferno's language is more like Java
(20:49:43) anth_x: well, the language itself isn't really anything like java, but it's similar in that you write in a given language that's turned into bytecode for a particular VM, that bytecode being portable.
(20:49:55) anth_x: limbo is *much* nicer. much more C-like.
(20:50:03) MrVandelay: Yeah, I meant it explains the portability
(20:50:14) MrVandelay: and makes me understand why people would run Plan 9 and Inferno
(20:50:22) anth_x: limbo was a big inspiration for google's new language "go".
(20:50:33) MrVandelay: Ah. I'm not at all familiar with it
(20:50:52) MrVandelay: I program in C, since I'm usually running *nix and most things are written in C
(20:51:07) MrVandelay: I don't mind learning other languages, though.
(20:51:28) anth_x: yeah. that's why most of the big projects have done their big computing in plan9 and used inferno for command/control.
(20:51:30) MrVandelay: But it seems, for the purpose of distribution, that inferno's approach of using a VM is essential unless all your nodes are the same arch, right?
(20:51:56) anth_x: depends how you're doing things.
(20:52:15) MrVandelay: Ok. Let me re-state some assumptions I've made
(20:52:37) anth_x: plan9 makes cross-compilation super easy, and since we're not doing process migration anyway, it's easy to just call the same named binary elsewhere.
(20:52:41) MrVandelay: Inferno isn't as I thought; it doesn't transparently distribute programs across nodes or cpus
(20:52:52) anth_x: in any case, "/bin/cat" will be the cat binary appropriate for your architecture.
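[To make the limbo discussion above concrete, the canonical hello-world module — a sketch of what Limbo source looks like. limbo(1) compiles it to a .dis file that runs unchanged on any architecture's Dis VM:]

    implement Hello;

    include "sys.m";
        sys: Sys;
    include "draw.m";

    Hello: module
    {
        init: fn(ctxt: ref Draw->Context, argv: list of string);
    };

    # entry point; the unused context and arguments are idiomatically named nil
    init(nil: ref Draw->Context, nil: list of string)
    {
        sys = load Sys Sys->PATH;    # bind the Sys module at run time
        sys->print("hello from the Dis VM\n");
    }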
(20:53:09) MrVandelay: But in Inferno I could easily execute a program/process on any processor in any node connected to it, right?
(20:53:13) anth_x: correct. nothing in the plan9/inferno world does process migration or the like.
(20:53:37) MrVandelay: Like, manually starting an application on core x
(20:53:47) anth_x: you don't address processors specifically, at least from the application level. but you can dispatch to a node.
(20:53:54) MrVandelay: Ah.
(20:54:16) MrVandelay: Can you do that on a userland level?
(20:54:24) anth_x: NIX is a plan9 derivative that initially was looking at addressing cores directly, but that part of it's been mostly abandoned.
(20:54:35) anth_x: ("nix" now mostly just refers to the 64-bit plan9 kernel)
(20:54:44) MrVandelay: Oh.
(20:55:12) MrVandelay: This is incredibly interesting
(20:55:28) anth_x: good. :-)
(20:56:00) anth_x: i don't have a paper handy to point you at, but you might read up on "xcpu", "hare", and plan9 on the blue gene.
(20:56:01) MrVandelay: So, you've still got terminals, right? What would be the command to start a process on node x?
(20:56:34) MrVandelay: I have the plan 9 white paper linked. I read through as much as I could muster before becoming overwhelmed (doesn't happen very often, but it's so different from any concepts I'm used to.)
(20:58:18) anth_x: 'cpu' or 'rx'.
(20:58:35) anth_x: e.g., from the cpu(1) man page:
(20:58:37) anth_x: eqn paper | rx kremvax troff -ms | rx deepthought lp
(20:59:04) anth_x: does eqn locally, pipes the output off to host kremvax for further processing, then prints the results from deepthought.
(20:59:27) MrVandelay: That's awesome!
(20:59:36) anth_x: (cpu is more common for interactive use, and gives you access to the calling system under /mnt/term)
(21:00:02) MrVandelay: And I assume, since all the resources on all nodes are accessible as a filesystem to all nodes, the actual binaries don't even have to exist on the node you're going to run them on?
(21:00:23) anth_x: you can do that, although it's less standard.
(21:00:30) MrVandelay: Because it'd be slower?
(21:00:31) anth_x: xcpu (or xcpu2, i forget which) did that.
(21:01:12) anth_x: slower sometimes; i think mostly because you don't need to bother. but you can certainly tell cpu to run /mnt/term/bin/whatever, and it'll run /bin/whatever off the calling system on the target's processor.
(21:01:35) anth_x: (and then it's up to you to make sure that's a sensible thing to do, like the right arch and whatnot)
(21:03:37) MrVandelay: Right, and the latency
(21:04:13) MrVandelay: Since this cluster will be comprised of identical hardware, could it run from the same filesystem on a NAS, or would each node have to have its own filesystem?
(21:04:25) MrVandelay: (getting into SSI territory)
(21:05:09) anth_x: no, it's typical for plan9 machines (cpu servers & terminals, regardless of architecture) to all boot off the same file system.
(21:05:30) anth_x: if your systems can PXE boot, you don't need to have any local storage at all.
(21:06:20) MrVandelay: So I just have to put Inferno on one drive and then all 6 nodes can access all those files at the same time without access-time issues?
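[A minimal rc(1) sketch of dispatching work with rx, in the spirit of the cpu(1) pipeline above. The node names, the worker command, and the jobs directory are hypothetical stand-ins for this cluster:]

    #!/bin/rc
    # fan a batch of jobs out across the cpu servers, one rx per node
    nodes=(node1 node2 node3 node4 node5)
    for(n in $nodes) {
        rx $n worker < jobs/$n.in > jobs/$n.out &
    }
    wait    # block until every remote job has finished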
(21:06:52) MrVandelay: Although I suppose this is futile, since Linux, which I'm going to run Inferno under, can't handle that
(21:07:21) MrVandelay: Although I suppose I could have a minimal linux core and then have the Inferno userland on the NAS, shared between the nodes
(21:07:56) anth_x: if you're doing hosted inferno, you'll need to have some local storage, but it can be quite minimal. each inferno could import the "real" file system from a designated host.
(21:08:40) MrVandelay: Hmm.
(21:08:55) MrVandelay: Each node has 32GB MMC onboard
(21:09:16) MrVandelay: The master node has a 250GB SSD
(21:09:33) MrVandelay: The hardware supports tftpboot or booting from nfs
(21:09:46) MrVandelay: But my concern is rather that Linux does not like to share filesystems with multiple systems
(21:09:59) MrVandelay: due to access-time issues
(21:10:46) MrVandelay: I figure that I'm going to have the linux distro (stripped Arch Linux) on each node's MMC, which will be completely loaded into RAM when the node is booted up
(21:10:54) anth_x: but that's fine. if you have your "real" inferno file system, with whatever you want to run, on the head node, the inferno on all the workers could import it, without linux being any the wiser.
(21:11:08) MrVandelay: then all writes and reads will occur to the NAS SSD instead of the MMC
(21:11:28) MrVandelay: Yeah, I figure there is some separation between Inferno and Linux even when it's hosted
(21:11:48) anth_x: typically. you can cross-mount if you like, but you don't have to.
(21:12:01) MrVandelay: What do you mean?
(21:12:31) MrVandelay: I'm just looking for the fastest way to avoid using the local MMC drives on the nodes (since MMCs are very slow.)
(21:12:36) anth_x: i mean, if you import something within inferno, it won't normally show up in linux, but you can make it do so, if you like (there's a 9p mount driver for linux)
(21:12:57) MrVandelay: Oh. Right.
(21:13:21) anth_x: yeah. so on your MMC system image, you'd stick a minimal inferno installation (the binary and a few support/config files), then have that mount the "real" fs from the inferno on the head node.
(21:13:45) MrVandelay: Yeah, but I'd have to run Inferno under Linux
(21:13:55) MrVandelay: In order to get access to all 8 cores of each node
(21:14:03) MrVandelay: If Inferno doesn't support HMP
(21:14:18) anth_x: probably. i'm less sure about that.
(21:14:35) MrVandelay: I'm very confused myself :-)
(21:14:47) anth_x: the HMP thing is a good question for the list.
(21:14:56) MrVandelay: When Inferno runs under Linux, is that the entire operating system running in a chrooted environment, or just userland tools?
(21:15:10) anth_x: the rest of what you're trying to do is, i think, pretty straight-forward in the inferno world, once you give up on SSI and process migration.
(21:15:26) MrVandelay: Yeah, I've understood that this is not SSI
(21:15:33) MrVandelay: I'm still intrigued by SSI (MOSIX, for example.)
(21:15:37) MrVandelay: But I understand that this isn't it
(21:15:48) MrVandelay: I can still see some very powerful uses for this, however, and I'd like to try it
(21:15:59) anth_x: i'm not sure what you mean by chroot. it's not an actual chroot, but processes within inferno are insulated from the rest of the environment.
(21:16:10) anth_x: depends what you mean, i guess.
(21:16:26) MrVandelay: My spontaneous reaction to having everything part of the filesystem is basically "Why isn't *nix like this already?"
(21:16:54) MrVandelay: Linux seems to have adopted something similar lately with devfs and such.
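[A sketch of the head-node import arrangement described above, using standard Inferno sh commands (listen, export, mount). The host name headnode is an assumption, and -A turns authentication off:]

    # on the head node's inferno: serve its root file system over styx
    ; listen -A 'tcp!*!styx' { export / & }

    # on each worker's inferno: import the head node's fs under /n/head
    ; mount -A tcp!headnode!styx /n/head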
(21:17:34) MrVandelay: So Inferno runs as a process in an isolated environment in Linux?
(21:17:36) anth_x: unix got stuck. plan9 was the original unix guys saying "okay, good ideas, let's start over". inferno continued the plan9 ideas.
(21:17:41) anth_x: correct.
(21:18:00) anth_x: well, to clarify: it isolates itself. it's just a normal user process as far as linux is concerned.
(21:18:03) MrVandelay: That's pretty cool. Then I can use it at the same time as MPICH
(21:18:08) MrVandelay: Cool.
(21:18:32) MrVandelay: So, in order to access Inferno, does one invoke a process that puts you inside some sort of chrooted inferno userland?
(21:18:43) anth_x: sort of, yeah.
(21:18:49) MrVandelay: That's really cool.
(21:19:00) anth_x: that part's super easy to play with. just grab inferno and do a local-only installation.
(21:19:16) anth_x: what you see in that case will be the same sort of isolation you'd get in the cluster case you're talking about.
(21:20:02) MrVandelay: I run very minimalistic linux. I just use Arch Linux with a lot of things uninstalled. My workstation is just cwm (9wm fork), and aside from that I only have two X apps: chromium and urxvt
(21:20:42) MrVandelay: I love cwm/9wm. On my cluster I don't run X, so it's going to be very, very small. It shouldn't (I hope) affect Inferno's performance too much.
(21:20:58) MrVandelay: That's cool.
(21:21:31) MrVandelay: Does the inferno process have its own filesystem in an image or something? Does it use its own partition?
(21:22:07) anth_x: typical use is that it uses the host file system, so /usr/inferno in linux becomes / in inferno.
(21:23:03) anth_x: but you have many options there. it's been many years, but i've built single-binary infernos that have enough built in to mount a remote file server.
(21:23:04) MrVandelay: Ah. So it's chrooted there. And I suppose that any files inside /usr/inferno just look like blank files rather than FIFOs outside of the Inferno process?
(21:23:50) anth_x: if you make a normal file inside inferno, it shows up as a normal file in linux.
(21:24:06) MrVandelay: I was mostly thinking about the filesystem representation of resources
(21:24:22) anth_x: so you can edit files on your host OS if you like. acme-sac was (is?) a project to bundle the acme editor as a stand-alone application for exactly that purpose.
(21:24:53) anth_x: but if, inside inferno, you import a remote file system, it doesn't show up at all in linux (again, unless you do something special to make it)
(21:25:25) MrVandelay: I mean, if Linux (outside of the inferno process) sees a cpu resource fifo (is that the correct term?) then that might be problematic for some things
(21:25:41) MrVandelay: Interesting.
(21:25:49) anth_x: there's no fifo. if you import /n/remoteserver in inferno, linux won't see that at all.
(21:26:17) MrVandelay: I meant, what do you call the files that represent the networked resources in Inferno/Plan 9?
(21:26:24) anth_x: files. ;-)
(21:26:25) MrVandelay: I could only think of fifo
(21:26:31) MrVandelay: Damn you and your simplicity ;p
(21:26:43) MrVandelay: Hmm.
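[The local-only installation mentioned above might look roughly like this on a Linux host. The repository URL is the official inferno-os mirror; OBJTYPE=arm and the /usr/inferno root are assumptions for these nodes:]

    git clone https://github.com/inferno-os/inferno-os.git /usr/inferno
    cd /usr/inferno
    # edit mkconfig: ROOT=/usr/inferno, SYSHOST=Linux, OBJTYPE=arm
    sh makemk.sh                              # bootstrap the mk build tool
    export PATH=$PATH:/usr/inferno/Linux/arm/bin
    mk nuke && mk install                     # build emu and the userland
    emu                                       # enter hosted inferno; /usr/inferno is now /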
(21:27:05) MrVandelay: So the smartest thing to do, if multiple simultaneous inferno processes can utilize the same inferno userland, would be to simply place the inferno userland on the NAS
(21:27:11) MrVandelay: have the inferno processes installed locally on each node
(21:27:23) MrVandelay: and have inferno work with the NAS userland on all nodes
(21:28:37) MrVandelay: The actual linux distro + inferno after booting would be in RAM on all nodes, which would have good latency and read/write speeds. The NAS would have higher latency, but I wouldn't have to sync files between the nodes
(21:30:27) anth_x: you can do that. if it's a real NAS, it's likely serving SAMBA or NFS or something, in which case you'll have to mount it in linux first. personally, i'd want to instead mount it over styx/9p, but that relies on being able to get your NAS to serve 9p, which may or may not be an option, depending on your setup.
(21:31:36) MrVandelay: The NAS is a SATA SSD connected directly to the master node
(21:31:40) MrVandelay: So I can have it run whatever I want
(21:31:52) anth_x: oh, yeah, good. so i'd run inferno there and serve over 9p.
(21:32:03) MrVandelay: I didn't want the added latency of having some consumer NAS
(21:33:07) MrVandelay: Basically: I'm using 6 ARMv7 nodes (8 cores each), all identical except that the master node, node 1, has the SSD
(21:33:51) MrVandelay: All nodes also have MMC modules for non-volatile local storage
(21:34:10) MrVandelay: All nodes are connected to a GigE switch
(21:34:25) MrVandelay: 16Gbps throughput, non-blocking
(21:35:39) MrVandelay: The nodes can pxeboot through pxelinux. The network cards themselves don't have PXE, but I could put pxelinux on their local MMCs and have them all boot from the NAS
(21:36:08) MrVandelay: I'm just not sure if Linux is capable of sharing a filesystem with other nodes. I've heard of access-time problems
(21:36:11) anth_x: if you're running inferno on top of linux, inferno doesn't care about the pxe thing.
(21:36:31) MrVandelay: Right, but I was thinking of booting the entire system (including Linux) from the NAS
(21:36:57) anth_x: oh. i don't know nearly enough about linux to be helpful there.
(21:37:11) MrVandelay: I mean, each node would have to have a different hostname, a different IP, etc.
(21:37:23) anth_x: yeah, i get what you're describing.
(21:37:37) MrVandelay: With the same userland they wouldn't. Also, if they access files independently of each other, the journaling of the filesystem on each local node would get fucked up
(21:37:46) MrVandelay: I suppose Inferno, being built for that, doesn't have those problems
(21:38:01) MrVandelay: I mean, what happens if two inferno nodes edit the same file at the same time?
(21:38:53) MrVandelay: I could theoretically pxeboot a unique linux userland for each node from the NAS, but I'm not sure if there's a point to that.
(21:39:02) anth_x: in the normal case, they'll just both edit it. whoever writes last wins.
(21:39:25) anth_x: but 9p has an "exclusive access" bit you can set on files to restrict it to only one open at a time.
(21:39:26) MrVandelay: It seems easier to just keep the entire, fairly small, distro in RAM from the MMC (they aren't going to reboot) and then just have user files mounted on the NAS
(21:39:32) MrVandelay: Ah.
(21:39:42) MrVandelay: But since I'm not going to run Plan 9, I should use Styx and not 9p, right?
(21:40:06) anth_x: the underlying protocol is the same, these days.
(21:40:11) anth_x: they used to be different.
(21:40:13) MrVandelay: Oh.
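[The "exclusive access" bit mentioned above is the DMEXCL mode bit in 9p. On Plan 9 it is set with chmod's l flag; the lock-file path here is a hypothetical example:]

    chmod +l /n/head/jobs/queue.lock   # only one client may hold this file open at a time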
(21:40:24) MrVandelay: Linux has 9p2000 support built in, iirc
(21:40:30) MrVandelay: whatever that means.
(21:40:32) anth_x: but now, "styx" really just means "9p, optionally using certificate-based authentication"
(21:40:55) MrVandelay: Oh, right. These protocols are encrypted.
(21:41:02) anth_x: 9p2000 is the current version. both plan9 and inferno use that.
(21:41:38) anth_x: the authentication method isn't defined in the protocol; rather, the protocol defines a hook for authentication to use.
(21:41:57) MrVandelay: Ah. So Linux supports it natively
(21:42:29) MrVandelay: Hmm. What does 9p use for authentication, if not certificates?
(21:42:47) anth_x: plan9 uses a shared-secret system; styx uses a public-key system. there has been some work towards interoperability, but i've not kept up with where it's at. if you don't use authentication (which is common on controlled networks), the protocols are identical.
(21:43:39) MrVandelay: I honestly don't need cryptography between the nodes at all. Can you turn that off to decrease latency?
(21:43:51) MrVandelay: They're going to be on an isolated subnet on an isolated LAN
(21:43:57) anth_x: yeah
(21:44:04) MrVandelay: Only the master node is going to have access to the internet, through a firewall
(21:44:12) MrVandelay: Ah. Sweet.
(21:44:32) MrVandelay: Because I'm really concerned with the latency between the nodes as it is. GigE is between 100-300 microseconds, as I understand it
(21:45:03) MrVandelay: That's a huge bottleneck as far as processor communication goes
(21:45:36) anth_x: yeah, but if you're not trying to do SSI, it typically matters a bit less (although it certainly can still be important)
(21:45:45) MrVandelay: True
(21:45:54) MrVandelay: Well, obviously I can't do SSI with inferno
(21:46:00) MrVandelay: Unless I write my own scheduler program for it
(21:46:26) MrVandelay: I figure I'd use Inferno for its strengths rather than trying to morph it into something it's not
(21:46:37) MrVandelay: and I'll create a MOSIX cluster for experimenting with SSI
(21:46:47) MrVandelay: Damn shame MOSIX only supports x86-64
(21:48:38) MrVandelay: So. Since Inferno is multi-arch, and Limbo is portable: if I write an app on my ARM cluster and then connect an x86-64 machine
(21:48:45) MrVandelay: Can I just run it transparently on that node?
(21:51:28) anth_x: yes.
(21:51:33) anth_x: and sparc, and mips, and...
(21:51:39) MrVandelay: The more I wrap my head around Inferno, the more cutting-edge it seems. Which is very awkward for something that's so old.
(21:51:45) MrVandelay: That's awesome.
(21:52:08) MrVandelay: This must make administering a cluster so easy
(21:52:20) anth_x: yeah. way less work to do.
(21:52:57) MrVandelay: Well, I doubt I'll run it on any SPARC machines. I was happy to get out of Solaris and SPARC 10 years ago.
(21:53:32) anth_x: i haven't turned my sparc on in years. i think the HD went at some point, but i don't even remember.
(21:53:49) anth_x: i tinker around with the old inferno javastation port periodically.
(21:54:01) MrVandelay: I hope I won't offend you if I say that I absolutely hated SunOS
(21:54:19) MrVandelay: Their hardware was very well built, though. I loved the LOMlite modules
(21:54:30) MrVandelay: But it was a pretty bad price/performance ratio
(21:54:35) anth_x: the javastation port was native. no sun software involved.
(21:54:54) MrVandelay: Oh.
(21:55:08) MrVandelay: I've never heard of it, actually.
(21:55:22) MrVandelay: Is Limbo, since it uses a VM, slow in comparison to C?
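[Back on the cross-mounting point: the built-in Linux 9p2000 support mentioned above is the kernel's v9fs client. A sketch of mounting an unauthenticated styx/9p export from Linux — headnode and the mount point are assumptions; 564 is the registered 9p port:]

    # mount an inferno/plan9 export using linux's v9fs client
    mount -t 9p -o trans=tcp,port=564,version=9p2000 headnode /mnt/head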
(21:55:49) MrVandelay: Java has a lot of garbage-collection problems
(21:58:38) anth_x: slower than C, certainly; faster than java.
(21:58:49) MrVandelay: Hmm.
(21:59:20) MrVandelay: So, since I already code C, and am planning on running the cluster on only one arch, should I rather go for Plan 9?
(21:59:35) MrVandelay: and can I run Plan 9 under Linux the same way as Inferno?
(22:00:45) bjorkintosh: MrVandelay, in a virtual environment.
(22:00:55) bjorkintosh: not hosted, as it would be with inferno.
(22:01:07) anth_x: i can't say what you *should* do, but it's worth considering running plan9 natively, if you can. that's what i'd do.
(22:01:15) MrVandelay: So any speed increases from using C would be negated by the virtual environment, bjorkintosh?
(22:01:16) anth_x: depends on what that existing C code is, though.
(22:01:32) MrVandelay: Well, I figure I'd write the C code for the system
(22:01:34) MrVandelay: or Limbo code
(22:01:38) MrVandelay: I'm not going to run any existing code
(22:02:06) MrVandelay: I'd love to run either Inferno or Plan 9 natively, but the problematic thing is hardware support
(22:02:25) MrVandelay: If the kernel doesn't support HMP, then only half of the cores would show up
(22:02:46) MrVandelay: I'd lose 24 cores
(22:02:50) MrVandelay: That's pretty bad.
(22:03:09) bjorkintosh: MrVandelay, what sort of hardware do you have?
(22:03:42) anth_x: he's got a Samsung Exynos 5
(22:04:02) MrVandelay: bjorkintosh: 6 of these: http://www.hardkernel.com/main/products/prdt_info.php?g_code=G143452239825
(22:04:11) anth_x: there was a brief discussion on 9fans about a plan9 port. hasn't been touched in a while.
(22:04:17) anth_x: you might follow up with the author and ask.
(22:04:30) bjorkintosh: oh.
(22:04:34) MrVandelay: Each node has an A15 quad-core 2GHz cpu and a 1.7GHz A7 quad-core cpu
(22:04:41) bjorkintosh: so where're the 24 cores coming from?
(22:04:52) MrVandelay: What do you mean?
(22:05:00) bjorkintosh: "I'd lose 24 cores"
(22:05:10) bjorkintosh: let me see. must have missed something.
(22:05:17) MrVandelay: I've got 6x of those I showed you
(22:05:23) MrVandelay: A total of 48 cores
(22:05:30) MrVandelay: if both processors are activated at once
(22:05:38) bjorkintosh: oh, you're building a cluster!
(22:05:43) MrVandelay: Oh, yeah :)
(22:05:47) bjorkintosh: nice!
(22:05:50) MrVandelay: By the way, my last name is Bjorkegren
(22:05:54) bjorkintosh: hahaha
(22:05:59) bjorkintosh: green birch?
(22:06:03) MrVandelay: Björkegren, technically, but I write it Bjorkegren in English
(22:06:08) bjorkintosh: or birch forest?
(22:06:19) MrVandelay: Birch branch, directly translated
(22:06:23) MrVandelay: In Swedish
(22:06:41) MrVandelay: Anyways, about the architecture. The Exynos 5
(22:06:43) bjorkintosh: oh i see.
(22:06:59) MrVandelay: The thing is that it was designed with these two processors because: the quad-core A15 2GHz is high performance, not energy efficient
(22:07:11) MrVandelay: the quad-core A7 is low performance, very energy efficient
(22:07:30) MrVandelay: The idea is that the architecture would switch to the high-performance processor only under heavy load
(22:07:38) MrVandelay: and the low-performance/energy-efficient processor only under low load
(22:07:55) MrVandelay: So as to get a good performance vs energy-efficiency ratio. Not to have both processors activated at once.
(22:08:22) MrVandelay: In the Linux kernel you can disable the architecture's ISK and use the linux kernel to schedule processes with both processors activated at once = 8 cores instead of 4
(22:08:34) MrVandelay: I'm afraid I'd lose that if I run Plan 9 or Inferno natively.
(22:09:23) MrVandelay: Hence "losing 24 cores" out of my 48 under linux
(22:09:39) anth_x: i don't know how different the various exynos ports are, but i'd ask steve (search 9fans for exynos)
(22:09:47) MrVandelay: k
(22:10:08) anth_x: i'm off. good luck.
(22:10:09) bjorkintosh: might be a rather interesting ci20 port too!
(22:10:17) bjorkintosh: MrVandelay, heard of the ci20?
(22:10:25) MrVandelay: Never
(22:10:31) MrVandelay: I mean, this is my first cluster project
(22:10:37) MrVandelay: anth_x: Thanks for all the help, man! I really appreciate your time :)
(22:10:40) bjorkintosh: it's MIPS-based. 32-bit.
(22:10:46) MrVandelay: bjorkintosh: Show me?
(22:10:55) bjorkintosh: http://store.imgtec.com/us/product/mips-creator-ci20/
(22:11:51) MrVandelay: Off-hand it seems it should be much slower than the XU4s, at $10 more?
(22:12:15) MrVandelay: I know ARM isn't known for processor strength, but octa-core 2GHz vs dual-core 1.2GHz?
(22:12:48) bjorkintosh: i bought it because MIPS.
(22:12:51) MrVandelay: The XU4 also has twice the amount of RAM, and GigE
(22:12:54) MrVandelay: Ah.
(22:13:04) MrVandelay: I don't know anything about MIPS. What's special about it?
(22:13:11) bjorkintosh: nothing.
(22:13:14) bjorkintosh: nostalgia, in my case.
(22:14:01) MrVandelay: Ah. Running some specific software?
(22:14:07) bjorkintosh: yep.
(22:14:09) bjorkintosh: Linux :D
(22:14:12) bjorkintosh: https://www.youtube.com/watch?v=J1VE6C0H2bU
(22:14:27) MrVandelay: I love that :D
(22:14:33) bjorkintosh: nothing else has been ported to it as far as i know.
(22:14:36) MrVandelay: Did you know that's actually a real-life file browser?
(22:14:40) bjorkintosh: yeah.
(22:14:46) bjorkintosh: i have an Irix machine here.
(22:14:56) bjorkintosh: haven't run it in years.
(22:15:01) MrVandelay: Oh, I've run IRIX. Was that MIPS?
(22:15:21) bjorkintosh: the hardware was based on MIPS, yeah.
(22:15:33) MrVandelay: I was using SGI O2s and Onyxes when I was doing virtual reality modelling in Realax and 3d modelling in Maya (back when it was made by Alias|Wavefront)
(22:15:56) MrVandelay: Their window manager was made entirely in vector graphics :)
(22:16:58) bjorkintosh: yeah. those were 64-bit MIPS machines.
(22:17:21) MrVandelay: They were really, really quite fast when it came to 3D modelling. I mean, it was such a long time ago.
(22:17:34) MrVandelay: Maya isn't really the fastest program ever.
(22:17:44) MrVandelay: Realax pretty much just crashed all the time.
(22:18:05) MrVandelay: The SGI hardware had a tendency to blow up because of overheating, though.
(22:20:45) bjorkintosh: mine has 5 fans in it.
(22:20:52) MrVandelay: bjorkintosh: Anyways, so yeah, I'm building a cluster. It's my first cluster, and I'm running individual arch linux installs on the nodes and just using MPICH/OpenMP for node processing
(22:21:41) MrVandelay: I was looking into SSI operating system solutions, however. Like MOSIX, where the cluster appears as one computer to the applications, so you don't have to code specifically for using the resources of the cluster
(22:22:02) bjorkintosh: ah yeah.
(22:22:09) bjorkintosh: that's a rather interesting project, i must say.
(22:22:13) MrVandelay: Pretty much all the SSI operating systems I found have been dead for a long time. Inferno was listed, but I understand now that it's not exactly SSI in this sense
(22:22:30) MrVandelay: I'm still interested in it in its own right, however.
(22:22:43) MrVandelay: Unfortunately, MOSIX is x86-64 only
(22:23:44) MrVandelay: Yeah, I love this project! I floated the idea of people getting access to my cluster (for learning to program OpenMP/MPICH etc) if they contributed a node. Since I'm on flat-rate 1/1Gbps, static IPs, etc.
(22:23:55) MrVandelay: And a couple of my friends contributed nodes, and I bought 4 myself
(22:25:15) bjorkintosh: oh, nice.
(22:27:55) MrVandelay: Feel free to contribute, if you have any use for remote access to experiment with MPICH/OpenMP/OpenCL etc cluster programming
(22:28:36) MrVandelay: And Inferno, if I get it up and running ^^ I'm admittedly terribly uninformed about Plan 9 and Inferno so far
(22:31:29) bjorkintosh: MrVandelay, there's quite a bit of information out there on inferno/plan9.
(22:31:41) MrVandelay: Yeah, I'm reading as much as I can
(22:32:05) MrVandelay: I was reading the white paper on Plan 9, and at first it's quite confusing, since it's pretty different from anything I'm used to
(22:32:05) bjorkintosh: http://doc.cat-v.org/plan_9/
(22:32:08) bjorkintosh: there is so much.
(22:32:13) bjorkintosh: yeah.
(22:32:34) MrVandelay: So I have to take breaks to absorb it.
(22:32:45) MrVandelay: I'm starting to grasp the concepts, and see the potential, however
(22:33:09) MrVandelay: I'd still love to run MOSIX. But that will be a separate x86-64 cluster, since that's the only thing it supports.
(22:36:52) MrVandelay: I searched the 9fans archives, and whether Exynos 5 HMP is supported or not seems sketchy at best
(22:37:37) MrVandelay: I think running Inferno hosted would be the best and fastest option, as all cores will be activated in Linux. That should more than make up for any overhead from the hosted environment. Especially since I use very minimal linux installations
(22:38:05) MrVandelay: and running plan9 virtualized will probably be worse than limbo + hosted inferno, performance-wise
(22:40:18) bjorkintosh: oh, you're trying to take advantage of limbo!
(22:40:21) bjorkintosh: makes sense.
(22:40:31) bjorkintosh: so many projects. so little time.
(22:40:37) MrVandelay: Well, not necessarily
(22:40:47) MrVandelay: I mean, I code in C and it's all going to be on the same arch
(22:41:00) MrVandelay: So Plan 9 would make more sense than Limbo
(22:41:43) MrVandelay: But the hardware support for running Plan 9 natively just isn't there. It seems as if it's hard enough to get it to boot natively on Exynos 5 without sketchy stability
(22:41:52) MrVandelay: and even if it does, you lose half the cores.
(22:42:27) MrVandelay: I can't run it hosted, so I'd have to run the entire plan 9 virtualized, which would kill the performance benefits worse than hosted-limbo and limbo's vm
(22:42:36) MrVandelay: err, hosted-inferno*
(22:43:20) bjorkintosh: right.
(22:43:45) MrVandelay: Would I get anything out of hosted inferno performance-wise, other than nostalgia?
(22:44:06) MrVandelay: If the performance penalty is bad, then it doesn't make any sense
(22:44:53) MrVandelay: I'm generally looking for ways to take advantage of the cluster, basically. I don't specifically have to run Inferno or Plan 9.
(22:47:12) MrVandelay: OpenMP and MPICH work well, but I have to have customized code for pretty much anything that takes advantage of the cluster. Inferno's ability to execute processes on remote processors is excellent
(22:49:18) MrVandelay: Also: unless the hosted-inferno process supports threads, it's going to use only one core of each node in the cluster, I suppose?
(22:49:32) bjorkintosh: cool.
(22:50:56) MrVandelay: It's not cool. What are your thoughts? :)
(01:31:47) qrstuv: "all the big cluster projects i know of that have used inferno have used it hosted on top of plan9 or linux."
(01:32:00) qrstuv: anth_x: what big cluster projects used inferno?
(01:34:49) anth_x: i forget which, but one of the hare/BGP things used it for job control/dispatch.
(01:34:59) anth_x: VN's also done a bunch of smaller grid thingies.
(02:00:53) doublec: MrVandelay: You could run a hosted inferno process on each core to utilize them, to work around hosted inferno's single-threadedness. A bit of a pain, though.
(02:28:57) anth_x: it's not really true that emu's single-threaded, although it can be hard to reason about when new native threads get created, if you care about the details.
(02:32:31) anth_x: it's probably closer to think of hosted inferno as single-threaded-with-some-exceptions, i guess.
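[If you did want one emu per core as doublec suggests, a minimal Linux-side sketch; the emu path, root directory, and core count are assumptions for this cluster, and taskset pins each instance to one core:]

    #!/bin/sh
    # start one hosted inferno (emu) per core, each pinned with taskset
    for core in 0 1 2 3 4 5 6 7
    do
        taskset -c $core /usr/inferno/Linux/arm/bin/emu -r /usr/inferno &
    done
    wait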