Conversation with #inferno at Fri May 13 19:01:58 2011 on powerman-asdf@irc.freenode.net (irc)

(22:39:34) powerman-asdf: how do I correctly start new threads from C code in the Inferno kernel? where can I find examples of such code?
(22:40:20) Fish- [~Fish@9fans.fr] entered the room.
(22:41:44) vsrinivas: kproc() is how. i think sys->stream does?
(22:46:30) powerman-asdf: vsrinivas: no, sys->stream does not. it just uses release(), like sys->read
(22:47:04) vsrinivas: okay. hrm. but kproc() is how to kick one off.
(22:48:53) powerman-asdf: sys->export with the ASYNC flag probably does
(22:52:18) powerman-asdf: also, I see calls to osenter/osleave in _some_ places - what are they needed for?
(23:38:21) fgudin [~none@cl-370.mrs-01.fr.sixxs.net] entered the room.
(23:47:44) powerman-asdf: I need to send about 10000 data items per second to TCP clients, synchronously with calculating this data. I.e. the main loop looks like: calculate one data item; send it to all connected clients; repeat. Some clients may be slow and thus slow down the overall system. What is the simplest way to detect them?
(23:50:39) powerman-asdf: One idea is to measure the time spent in each write()/flush() to every client. But this means I'd have to call sys->millisec() twice per write()/flush(), i.e. 10000*2*number_of_connected_clients times per second. That's too much: sys->millisec() isn't fast enough to be used this way without noticeably slowing down the overall system.
(23:52:15) powerman-asdf: With async i/o I usually just use a large enough buffer per client, and detect slow clients by buffer overflow (which never happens for sufficiently fast clients).
(23:55:40) powerman-asdf: But with sync i/o this doesn't work. And I need sync i/o in this task, because it makes no sense to calculate data faster than the clients can process it (calculating it faster steals CPU time from the clients, so they process data even more slowly, with side effects that cause a significant overall slowdown for reasons that are very hard to track down).
(00:10:05) powerman-asdf: Right now it looks like I need some way to get the current time (milli- or nanoseconds) very fast and without noticeable CPU load. For example, by simply reading an int variable in Limbo whose value is updated by a separate kproc running in the background. All Limbo apps could share the same variable, effectively reusing the same kproc to update it.
(00:12:43) powerman-asdf: But this means implementing one more C module, which is probably overkill for such a simple task. So, are there other, simpler ways to detect a "slow client"?
(00:44:32) Fish- left the room (quit: Quit: So Long, and Thanks for All the Fish).
(01:22:12) fgudin left the room (quit: Remote host closed the connection).
(11:26:27) Fish- [~Fish@9fans.fr] entered the room.
(12:33:30) Fish- left the room (quit: Quit: So Long, and Thanks for All the Fish).
(14:12:24) Fish- [~Fish@coss6.exosec.net] entered the room.
(14:15:41) Fish left the room (quit: Ping timeout: 240 seconds).
(15:02:19) Fish [~Fish@bus77-2-82-244-150-190.fbx.proxad.net] entered the room.
(15:33:01) Fish left the room (quit: Quit: So Long, and Thanks for All the Fish).
(15:33:53) Fish [~Fish@bus77-2-82-244-150-190.fbx.proxad.net] entered the room.
(17:04:42) anth_x: i can think of a few ways to do what you want if you get issue 263 addressed, but otherwise you've covered the examples i can think of.
(17:05:04) anth_x: and now, breakfast.
(19:21:17) The account has disconnected and you are no longer in this chat.
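
A minimal sketch of vsrinivas's kproc() answer follows. It assumes the emu-style signature kproc(char *name, void (*f)(void*), void *arg, int flags) and emu's osmillisleep(); ticker/tickerinit and the 100 ms interval are made-up illustrations, and the native kernel's variant may differ:

	static ulong ticks;

	/* body of the new kernel thread; loops until the kproc is killed */
	static void
	ticker(void *a)
	{
		USED(a);
		for(;;){
			ticks++;
			osmillisleep(100);	/* block in the host for 100 ms */
		}
	}

	void
	tickerinit(void)
	{
		/* the name shows up in the kernel's process listing;
		 * the last argument is the KP* flag set (0 = none) */
		kproc("ticker", ticker, nil, 0);
	}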
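
As for osenter()/osleave(): in emu they bracket blocking host system calls, so the scheduler knows the kproc is blocked in the host OS rather than inside Inferno, and can deal with kills/notes when the call returns. A sketch of the usual pattern, with hostread() a hypothetical name:

	/* read from a host file descriptor inside a kproc */
	int
	hostread(int fd, void *buf, int n)
	{
		int r;

		osenter();		/* entering a blocking host call */
		r = read(fd, buf, n);	/* host system call; may block */
		osleave();		/* back under emu's scheduler */
		return r;
	}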
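
The kproc-updated time variable proposed near the end of the log is the same kproc() pattern again, with the loop body replaced by a clock read. This assumes emu's osmillisec() (the primitive sys->millisec appears to bottom out in); the names and the 1 ms refresh interval are made up:

	static ulong fasttime;	/* read directly by anyone needing a cheap timestamp */

	static void
	timeproc(void *a)
	{
		USED(a);
		for(;;){
			fasttime = osmillisec();	/* refresh the shared value */
			osmillisleep(1);		/* ~1 ms resolution */
		}
	}

	void
	fasttimeinit(void)
	{
		kproc("fasttime", timeproc, nil, 0);
	}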
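
And a sketch of the buffer-overflow detection described above for async i/o, in plain C: each client gets a ring buffer sized so that a fast client never fills it, and hitting the overflow branch is itself the "slow client" signal. The Client structure and buffer size are made up for illustration:

	enum { Bufsize = 64*1024 };	/* per-client buffer; tune to the data rate */

	typedef struct Client Client;
	struct Client {
		unsigned char	buf[Bufsize];
		int	head;	/* next byte to send to the socket */
		int	tail;	/* next free slot */
		int	slow;	/* set once the buffer overflows */
	};

	/* queue one data item without blocking; returns -1 and marks
	 * the client slow if the buffer cannot hold it */
	int
	queueitem(Client *c, unsigned char *item, int n)
	{
		int i, used;

		used = c->tail - c->head;
		if(used < 0)
			used += Bufsize;
		if(used + n >= Bufsize){
			c->slow = 1;	/* disconnect or skip this client */
			return -1;
		}
		for(i = 0; i < n; i++){
			c->buf[c->tail] = item[i];
			c->tail = (c->tail + 1) % Bufsize;
		}
		return 0;
	}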