Comments on PLEKTIX: Information and Structure in Complex Systems (Ben Allen) (8 comments)

Very interesting paper. The concepts suggest applications in computer science, and in particular to the area of software complexity metrics, an area that has long needed an objective, solid foundation.

Currently, most software complexity metrics produce a single scalar value. What we need is something like your complexity profile, which would give a series of metrics, one per scale. For a well-structured software system (with low coupling and high cohesion), the metric should give a result similar to your metric on a system of largely independent components. For a poorly structured software system, the complexity profile should be relatively flat.

One problem I encounter in applying your idea is that software really only has low-level couplings: software components don't refer to other components in their entirety. Rather, many individual lines of code in one component reference individual lines of code in others. But if I account for dependencies at the lowest scale, then there is nothing left to count at higher scales. Yet intuitively, software modules and subsystems do depend on other ones.

Are you aware of anyone employing your ideas in this area?
If so, how have they adapted it to quantifying software complexity?

Thank you very much.

-- Anonymous, 2015-01-05

I agree -- looking at the scale that is producing order (i.e. driving self-organization) helps to describe the system with a lower Shannon entropy. I am looking at doing this informally in finance, management, and economics. My approach is to tag recurrent dynamics produced by complex systems and thereby compress information about the system. A good example is identifying a positive feedback loop and its associated resource constraint. Just doing this would have helped to predict the 2008 crisis (the feedback loop was in credit/housing, and the resource constraint was the population that qualified for a mortgage at the lower bound of mortgage underwriting standards). This alone would be an improvement over macroprudential analysis, which tends to look only at mean and variance as system descriptors.

What I would like to see is a merging of informal and formal approaches, and an explicit "fine-tuning" of the trade-off between rigor and relevance to decision making. Analysis of simple systems (mean/variance) has just such a "mix", and complex systems will get there eventually.

-- Diego, 2014-10-26

Good question! I think the key to answering it is to think of information as applying at different scales (see my reply to observingideas above).

Of systems A, B, and C in my post, system B has the highest entropy. To describe it at the smallest scale requires a maximal amount of information.
But if you want to look at its large-scale behavior, then the central limit theorem applies and you can describe it very simply using summary statistics like mean and variance (as you say).

The starlings (example C) have reduced information at the smallest scale. This reduction occurs because information describing their individual movements is partly redundant: knowing the motions of one tells you something about what its neighbors are doing. But it is this very redundancy that enables their large-scale behavior, which does not "average out" when we look above the level of the individual.

In our formalism, we would say that system C has less total information than system B, but the information in system C applies at a larger scale.

The "true novelty" question is difficult for our formalism to answer, since it is purely descriptive. We address the "what" quantitatively, but the "why" and "how" must be left to future work.

-- Ben Allen, 2014-10-26

Thanks! What we do, beyond traditional information theory, is take the "scale" idea seriously. In our framework, all information applies at a particular scale. Large-scale information tells you a lot about the system, whereas small-scale information applies only to isolated parts.

In any system, there is a tradeoff between the total entropy (degrees of freedom) and the scale at which the system can act. For example, if ants want to move a large object, they need to behave in a highly coordinated fashion, reducing their collective degrees of freedom.
In return, they are able to perform a large-scale action.

Our paper gives a mathematical formalism for quantifying this tradeoff.

-- Ben Allen, 2014-10-26

Great intro. Raises a few questions in my mind.

Complex system outcomes that conform to normal statistical distributions lend themselves to "information compression", i.e. one can describe the system using mean and variance. However, most complex systems either do not output stable distributions, or their distributions are not normal. In that case, their "compressibility" is lower and their Shannon entropy higher. So the challenge with complex systems is that self-organization implies some lower entropy/higher order, but it is tough to translate this into a lower Shannon entropy. To do so requires discovering a shorthand way to describe the more-or-less stable higher-order dynamic at work. The lightbulb example is one in which the system outputs a nice statistical relationship. I just wonder what happens when there is true novelty produced, one that is also subject to non-normal behavior like feedback loops and/or phase shifts? I suppose the answer will be in your paper, but anticipating that it may be quite technical, perhaps you could explain how the overlap concept works at higher levels of complexity?

-- Diego, 2014-10-25

Nice intro to information theory. Too bad it took that much space.
I would very much love to read more about how all this relates to the ideas presented in the newest paper, "An Information-Theoretic Formalism for Multiscale Structure in Complex Systems", and to your project to characterize the structure of complex systems in general.

-- Anonymous, 2014-10-25

That's an excellent point, and I've never thought about it before! I think it could indeed be useful to define "entropic warning signals". I know that systems sometimes cycle as they approach a tipping point. Entropy measures could detect the difference between this cycling (which could mean trouble) and random noise (meaning everything is normal).

Fantastic idea! As to whether it's been done before, I don't know offhand.

-- Ben Allen, 2014-10-24

In the geosciences many people use the autocorrelation function to describe the temporal structure of a time series. It is derived by computing the linear correlation for pairs of data points separated by a certain temporal difference, for a large number of temporal differences.

Theoretically it is possible that the scatterplot of the pairs shows no linear correlation, but that there is nonetheless a relationship. For example, all the points could lie on a circle. With your dependency measure (entropy) you could see the difference between an uncorrelated cloud of points and points lying, e.g., on a circle.

Do you know of anyone using an "auto-entropy function" to describe the temporal structure of a time series?
Do you know of any applications where such a description of the temporal behaviour of a time series might be useful?

-- Victor Venema, 2014-10-24
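[Editor's note: Victor's circle example is easy to check numerically. The sketch below (plain Python; the helper names and the crude equal-width histogram estimator of mutual information are this note's own illustration, not anything from the paper or the comments) compares the Pearson correlation with an estimated mutual information for points on a circle versus independent noise.]

```python
import math
import random

random.seed(0)
N = 20000

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def mutual_info(xs, ys, bins=16):
    """Naive equal-width histogram estimate of I(X;Y) in bits.

    Only a rough illustration: histogram estimators carry a small
    positive bias and are sensitive to the choice of bin count.
    """
    def bin_of(v, lo, hi):
        b = int((v - lo) / (hi - lo) * bins)
        return min(b, bins - 1)  # clamp the maximum value into the top bin

    lox, hix = min(xs), max(xs)
    loy, hiy = min(ys), max(ys)
    n = len(xs)
    joint = {}
    px = [0] * bins
    py = [0] * bins
    for x, y in zip(xs, ys):
        i, j = bin_of(x, lox, hix), bin_of(y, loy, hiy)
        joint[(i, j)] = joint.get((i, j), 0) + 1
        px[i] += 1
        py[j] += 1
    mi = 0.0
    for (i, j), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((px[i] / n) * (py[j] / n)))
    return mi

# Points on a circle: zero linear correlation, but strongly dependent.
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
cx = [math.cos(t) for t in theta]
cy = [math.sin(t) for t in theta]

# Independent noise: uncorrelated AND independent.
ux = [random.uniform(-1, 1) for _ in range(N)]
uy = [random.uniform(-1, 1) for _ in range(N)]

print("circle: r = %+.3f, MI = %.2f bits" % (pearson(cx, cy), mutual_info(cx, cy)))
print("noise : r = %+.3f, MI = %.2f bits" % (pearson(ux, uy), mutual_info(ux, uy)))
```

Both scatterplots show a correlation near zero, but the estimated mutual information is substantial for the circle and essentially zero (apart from a small estimation bias) for the independent noise, which is exactly the distinction an "auto-entropy function" would pick up in a time series.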