[cgl_discussion] v2.5 TIPC TODO list

Mika Kukkonen mika at osdl.org
Thu Mar 13 13:00:20 PST 2003

On to, 2003-03-13 at 12:19, Jon Maloy wrote:
> >	1) Spurious printk's to the console:
> >             - bloody annoying
> >	     - fix is to replace every printf* from the code
> >	       with the err(), warn(), info(), and dbg()
> >
> This will tie the different printout functions to a certain "debug level".
> Probably OK, since this is the way they are used anyway, and I was never
> happy with printf0(), printf1() etc. in the first place. But keep the
> target macros!

You mean that I should preserve the functionality that lets you select
(at compile time) whether the output goes to the console, the debug
buffer (or was it the dmesg buffer?) or to a log file?

I felt this was redundant (you can do tail -f on the log file anyway),
but it is easy to put back in if it is really needed.
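To make the intent of item 1 concrete, here is a minimal userspace sketch of
what the err()/warn()/info()/dbg() replacement could look like. The level
names, numbering, and message prefixes are assumptions for illustration, not
the actual TIPC code; fprintf(stderr, ...) stands in for printk():

```c
#include <stdio.h>

/* Hypothetical level values; the real TIPC code may number them differently. */
#define TIPC_LOG_ERR  1
#define TIPC_LOG_WARN 2
#define TIPC_LOG_INFO 3
#define TIPC_LOG_DBG  4

/* Compile-time threshold: messages above this level compile away entirely. */
#ifndef TIPC_LOG_LEVEL
#define TIPC_LOG_LEVEL TIPC_LOG_INFO
#endif

static int tipc_log_enabled(int lvl)
{
	return lvl <= TIPC_LOG_LEVEL;
}

/* In the kernel these would expand to printk() with a KERN_* prefix;
 * fprintf(stderr, ...) stands in for this userspace sketch. */
#define tipc_log(lvl, fmt, ...) \
	do { \
		if (tipc_log_enabled(lvl)) \
			fprintf(stderr, fmt, ##__VA_ARGS__); \
	} while (0)

#define err(fmt, ...)  tipc_log(TIPC_LOG_ERR,  "TIPC error: " fmt, ##__VA_ARGS__)
#define warn(fmt, ...) tipc_log(TIPC_LOG_WARN, "TIPC warning: " fmt, ##__VA_ARGS__)
#define info(fmt, ...) tipc_log(TIPC_LOG_INFO, "TIPC: " fmt, ##__VA_ARGS__)
#define dbg(fmt, ...)  tipc_log(TIPC_LOG_DBG,  "TIPC debug: " fmt, ##__VA_ARGS__)
```

Redirecting output to a file instead of the console would then be a matter of
swapping the sink inside tipc_log(), which is where the compile-time target
selection discussed above would live.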

> >	2) Decoupling of dbg_buf, stats_buf and shutting down
> >	   the raw bearers:
> >	     - currently if you turn off TIPC_DBG_BUF you lose
> >	       statistics and more importantly shutting down
> >	       the raw bearers
> >	     - I have my doubts about the usability of the whole
> >	       dbg_buf concept, but at least currently plan is not
> >	       to diverge on functionality, so it should stay
> >
> Please don't remove it. You will regret it. (This is not a threat ;-)  ).
> Having done troubleshooting in this stack for a number of years, I know
> what I am talking about.

Having done the same kind of thing on Nokia's proprietary OS, I have a
pretty good picture of the desired functionality too ;-)

Unfortunately I do not see how to do the same thing on Linux (i.e.
monitor kernel memory in real time), unless you are willing to start
reading /dev/kmem directly (and how do you find the right location)?

But anyway, as I stated, I am willing to keep it in if it can be turned
off at compile time.

> >	     - I partly started separation of dbg_buf and stats_buf,
> >	       but code duplication is annoying
> >
> I don't understand the reason for this. debug_buf() is evidently a
> misnomer, since it is also used to format statistics, but the
> functionality remains the same. What differs in the two cases is only
> which memory area they are formatting into, and the purpose of it. Why
> not simply rename it to print_buf(), string_formatter() or similar and
> use it the way it is. Code duplication should be unnecessary here.

Because I want to be able to turn the debug buffer off (i.e. not compile
it in) and still have access to statistics (and be able to shut down the
raw bearers). But you are right, the way I started doing it was the wrong
approach, and some superset functionality should be investigated. But
shutting down bearers definitely should not depend on the information
stored in the debug and stat buffers.
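The rename-and-share idea could look roughly like the sketch below: one
generic formatter that writes into a caller-supplied buffer, so the debug log
and the statistics dump both use it, and a config option can drop only the
persistent debug buffer. The struct and function names here are hypothetical,
not the actual TIPC API:

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical shared formatter: both the debug log and the statistics
 * dump format into a caller-supplied print_buf, so compiling out
 * TIPC_DBG_BUF removes only the debug storage, not the formatter. */
struct print_buf {
	char *buf;    /* start of the memory area being formatted into */
	size_t size;  /* total capacity, including the trailing NUL */
	size_t used;  /* bytes written so far */
};

static void pb_init(struct print_buf *pb, char *area, size_t size)
{
	pb->buf = area;
	pb->size = size;
	pb->used = 0;
	if (size)
		area[0] = '\0';
}

static void pb_printf(struct print_buf *pb, const char *fmt, ...)
{
	va_list ap;
	int n;

	va_start(ap, fmt);
	n = vsnprintf(pb->buf + pb->used, pb->size - pb->used, fmt, ap);
	va_end(ap);
	/* Only count bytes that actually fit; silently drop on overflow. */
	if (n > 0 && (size_t)n < pb->size - pb->used)
		pb->used += (size_t)n;
}
```

The stats code and the bearer-shutdown path would each pass their own buffer,
so neither depends on the debug buffer being compiled in.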

> >	7) Replacing long macros (#define foo() with several lines
> >	   of code) with static inline's
> >	        - plenty of those ...
> >
> Macros are used in the core because certain compilers (at least one used
> within Ericsson) don't support inlining. I can't see that they hurt,
> except from an aesthetic viewpoint. But once again, I know you guys don't
> care about portability...

Well, there seems to be a strong dislike of macros in the kernel
community, and as gcc supports inlines (which offer type checking a la
C++), replacing at least the multi-line macros with static inlines seems
to be the right thing to do.
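A small before/after sketch of what item 7 amounts to. The field and function
names here are invented for illustration, not taken from the TIPC sources;
the point is only that the inline version gets argument type checking while
generating the same code under gcc:

```c
/* Before: a function-like macro -- arguments are pasted as text,
 * so any pointer type (or none) is silently accepted. */
#define BUF_SEQNO_MACRO(hdr) \
	(((hdr)->word1 >> 16) & 0xffff)

/* Hypothetical message header, for illustration only. */
struct msg_hdr {
	unsigned int word1;
};

/* After: a static inline -- same generated code with gcc, but the
 * compiler now checks that the argument really is a struct msg_hdr *. */
static inline unsigned int buf_seqno(struct msg_hdr *hdr)
{
	return (hdr->word1 >> 16) & 0xffff;
}
```

Passing the wrong pointer type to buf_seqno() now produces a compiler
diagnostic instead of silently compiling, which is the type-checking benefit
mentioned above.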

> >     	  TIPC (Jon's code)	v2.5 TIPC
> >		  =================	===========
> >		  Network		network
> >		  Zone			zone
> >		  Subnetwork		cluster
> >		  Manager		manager
> >		  Processor		worker
> >		  Device Processor	slave
> >	     - This really is not very important, and is better done
> >	       when you are doing modifications anyway to the code
> >
> Manager, worker, slave ?  :-)
> It is funny, but at least the term 'worker' is not very illustrative
> of what it represents in the structure. I would still prefer 'node'
> or 'processor'.

The trouble is that people I have talked to immediately think Processor =
CPU, which does not hold (i.e. you can have many processors inside one
CPU). If it did hold, then "node" would be a logical choice (it seems
that clustering people want to say "node = OS instance").

Actually, so far the closest thing to a "node" seems to be the manager,
but I am OK with it being called manager. I am not too happy with slave
and worker either, but it seems that there really is no good word. And
anyway, it will be the LKML people who have the last word here.

This naming thing is really annoying: technically it does not matter
a bit, but every person I have talked to seems to prefer a different
set of names ... <sigh>.
