FlexiTools'2010
Re: FlexiTools'2010 [message #528012 is a reply to message #528011]
Sun, 18 April 2010 18:17
Hi Miles, I agree, when building such a tool on top of Eclipse it might be possible to walk towards more formal modeling,
starting from sketches. It is quite a challenge to make this easy and flexible enough, though.
We are hoping to get the project approved, and to receive lots of feedback and contributions.
By the way, feel free to add yourself here: http://wiki.eclipse.org/Sketch/Proposal
Thank you for the comments :)
Miles Parker escreveu:
> Ugo,
>
> That is very very cool. Some friends and I have been talking about the
> idea of supporting modeling of human and natural systems by non-experts
> using just such a technique -- where people without training could
> "invent" their own graphic language and over time it could become more
> formal. It would be neat to mash this up with one of those DIY sketch
> board and projector designs... I forget what they're called.
> cheers,
>
> Miles
Re: FlexiTools'2010 [message #528017 is a reply to message #528011]
Sun, 18 April 2010 18:28
BTW, the smart boards (I guess this is how they are called) unfortunately do not come close to tablets in terms of precision. I've made some tests and the results are not very good. Maybe in the future... :)
But the whole idea of "the modeler of everything" seems promising, since users might as well propose new schemas for existing models, or map their models to the ones made by programmers, don't you think?
Ugo
Miles Parker escreveu:
> Ugo,
>
> That is very very cool. Some friends and I have been talking about the
> idea of supporting modeling of human and natural systems by non-experts
> using just such a technique -- where people without training could
> "invent" their own graphic language and over time it could become more
> formal. It would be neat to mash this up with one of those DIY sketch
> board and projector designs... I forget what they're called.
> cheers,
>
> Miles
Re: FlexiTools'2010 [message #528021 is a reply to message #528017]
Sun, 18 April 2010 19:14
Miles Parker Messages: 1341 Registered: July 2009
Senior Member
Ugo Sangiorgi wrote on Sun, 18 April 2010 14:28:
> BTW, the smart boards (I guess this is how they are called) unfortunately do not come close to tablets,
> in terms of precision. I've made some tests and the results are not very good. Maybe in the future...
Yes, that makes sense. I haven't had a chance to try them out much. The ones I've seen are very much a DIY thing. I saw a demo of one that a local artist built out of a plywood box and an old projector. It seemed to work well for mouse-type and gesture behavior -- grabbing objects, twisting and scaling them, etc. I wonder if the kind with LEDs around the edges, as opposed to the ones that use light scattering, would be precise enough. Or perhaps you could have a zoom mode so that the users did something like:
1. Swipe the area that they want to draw.
2. Draw the image across a large part of the screen.
3. Click a hot-spot and have the drawer return to "swipe mode".
I guess I'm thinking of something like the gesture mode the old Palms had, though in that case there was a separate area for gestures. That might be a better way to handle it, actually. But we're getting pretty far afield now...
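Just to pin that down, here's a rough sketch of the mode switch as a little state machine -- all of the names here are hypothetical, not from any real whiteboard API, just making the three steps concrete:

    import java.awt.Dimension;
    import java.awt.Point;
    import java.awt.Rectangle;

    // Hypothetical sketch of the swipe/draw mode switch -- illustrative only.
    enum BoardMode { SWIPE, DRAW }

    class ZoomModeController {
        private BoardMode mode = BoardMode.SWIPE;
        private Rectangle zoomArea; // region the user swiped, shown magnified

        // 1. The user swipes the area they want to draw in.
        void onSwipe(Rectangle area) {
            if (mode == BoardMode.SWIPE) {
                zoomArea = area;
                mode = BoardMode.DRAW; // the swiped region now fills the screen
            }
        }

        // 2. Strokes drawn across the whole screen are scaled back down into
        //    the swiped region -- the scale-down is what recovers precision.
        Point toModelCoordinates(Point screenPoint, Dimension screenSize) {
            double sx = (double) zoomArea.width / screenSize.width;
            double sy = (double) zoomArea.height / screenSize.height;
            return new Point(zoomArea.x + (int) (screenPoint.x * sx),
                             zoomArea.y + (int) (screenPoint.y * sy));
        }

        // 3. Clicking a hot-spot returns the board to swipe mode.
        void onHotSpotClick() {
            mode = BoardMode.SWIPE;
        }
    }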
Quote:
> But the whole idea of "the modeler of everything" seems promising, since users might as well propose new schemas for existing models,
> or map their models to the ones made by programmers, don't you think?
Yes, that's right. It's actually an idea that a friend of mine had when we were discussing real world systems. The idea is that you'd have a sort of evolving meta-model. This process might be a sort of 80/20 thing where expert mentors helped guide and reify the process. Then that would either create an entirely new representational scheme or, as you suggest, map to an existing meta-model through some sort of M2M scheme. The specific target for me is to be able to allow non-experts to end up with models that are defined in the Agent Modeling Framework, as then people would actually be able to push a button and see their models run -- with 3-dimensional graphics, etc.!
If this worked we'd have all of the technical pieces, though there is of course a tremendous amount of devil in the details -- both from a technical standpoint and from a cultural-process POV. Super ambitious and obviously outside of current scope LOL.
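So that the M2M step isn't totally hand-wavy, here is a minimal sketch of the kind of mapping I mean -- every type and name here is hypothetical, none of this is real AMF or EMF API:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical M2M sketch: map informal, user-invented concepts onto a
    // formal target meta-model. None of these types are real AMF classes.
    class InformalConcept {
        String name;                  // a concept the user invented, e.g. "wolf"
        Map<String, String> attributes = new HashMap<>();
    }

    class AgentType {                 // stand-in for a formal meta-model element
        String name;
        Map<String, String> properties = new HashMap<>();
    }

    class SketchToAgentMapper {
        // Keeping the mapping explicit means the assumptions made to
        // formalize a concept stay documented and transparent.
        private final Map<String, String> conceptToType = new HashMap<>();

        void declareMapping(String conceptName, String agentTypeName) {
            conceptToType.put(conceptName, agentTypeName);
        }

        AgentType transform(InformalConcept concept) {
            AgentType agent = new AgentType();
            agent.name = conceptToType.getOrDefault(concept.name, concept.name);
            agent.properties.putAll(concept.attributes);
            return agent;
        }
    }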
-Miles
Re: FlexiTools'2010 [message #528031 is a reply to message #528021]
Mon, 19 April 2010 00:14
Miles Parker escreveu:
> Ugo Sangiorgi wrote on Sun, 18 April 2010 14:28
>> BTW, the smart boards (I guess this is how they are called)
>> unfortunately do not come close to tablets,
>> in terms of precision. I've made some tests and the results are not very
>> good. Maybe in the future... :)
>
>
> Yes, that makes sense. I haven't had a chance to try them out much. The
> ones I've seen are very much a DIY thing. I saw a demo of one that a
> local artist built out of a plywood box and an old projector. :) It
> seemed to work well for mouse type and gesture behavior -- grabbing
> objects, twisting and scaling them, etc. I wonder if the kind with LEDs
> around the edges as opposed to the ones that use light scattering would
> be precise enough. Or perhaps you could have a zoom mode so that the
> users did something like:
>
> 1. Swipe the area that they want to draw.
> 2. Draw the image across a large part of the screen.
> 3. Click a hot-spot and have the drawer return to "swipe mode".
> I guess I'm thinking something like the gesture mode of the old Palms
> had, though in that case there was a separate area for gestures. That
> might be a better way to handle it actually. But getting pretty far
> afield now..
>
> Quote:
>> But the whole idea of "the modeler of everything" seems promising,
>> since users might as well propose new schemas for existing models,
>> or map their models to the ones made by programmers, don't you think?
>
>
> Yes, that's right. It's actually an idea that a friend of mine had when we
> were discussing real world systems. The idea is that you'd have a sort
> of evolving meta-model. This process might be a sort of 80/20 thing
> where expert mentors helped guide and reify the process. Then that would
> either create an entirely new representational scheme or as you suggest
> map to an existing meta-model through some sort of M2M scheme. The
> specific target for me is to be able to allow non-experts to end up with
> models that are defined in the Agent Modeling Framework, as then people
> would actually be able to push a button and see their models run -- with
> 3-dimensional graphics, etc.! :D
That would be amazing. I have a strong belief that users might come up with their own solutions to their problems if we put simple tools in their hands. After all, they know their problems better than anyone, and there are not many of us coders in the world (fortunately?) :)
>
> If this worked we'd have all of the technical pieces, though there is of
> course a tremendous amount of devil in the details -- both from a
> technical standpoint but also from a cultural process POV. Super
> ambitious and obviously outside of current scope LOL.
>
I agree it is quite a challenge -- more from the technological POV than from the cultural one, maybe.
I've been studying a lot of Semiotics lately; the models from both sides might match and structures might be resignified, but the problem resides more in the technological support.
Although we can speak metaphorically, we can't program in the same way... abstraction is just not enough.
What you suggested is just outside the immediate scope; it is exactly this kind of end-user focus I would like Sketch to reach.
Thank you for your support.
Ugo
Re: FlexiTools'2010 [message #528032 is a reply to message #528016]
Mon, 19 April 2010 00:21
Miles Parker escreveu:
> Ugo Sangiorgi wrote on Sun, 18 April 2010 14:17
>> Hi Miles, I agree, when building such a tool on top of Eclipse it might be possible to walk towards more formal modeling, starting from sketches. It is quite a challenge to make this easy and
>> flexible enough, though.
>
>
> Yes, that is probably an understatement. Very worthy goal though. Though I see that this tool would have a lot of value in just allowing people to make freehand sketches that instantiate
> traditional meta-models, I think by far the most interesting use case is as a collaboration tool to allow people to iteratively come up with semi-formal representations of their problem domains.
>
> Quote:
>> We are hoping to get the project approved, and to receive lots of feedback and contributions. By the way, feel free to add yourself here: http://wiki.eclipse.org/Sketch/Proposal
>
>
> Done... I definitely support this worthwhile project, and while I probably can't contribute to the technology, I would be happy to be an informal mentor, as I've been through the process somewhat recently.
Thanks again for your support, Miles.
I have a little knowledge of multi-agent systems, but once the code finally gets published you will be able to see how a true multi-agent methodology might fit the recognizer -- it is currently implemented as bare threads, with no protocols or message exchanging. :)
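To give an idea of what I mean by "bare threads", here is an illustrative sketch -- this is not the actual Sketch code, only the general shape of it:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Illustrative only -- not the actual Sketch recognizer. Each candidate
    // matcher runs as a plain thread; no protocols, no message exchange,
    // results are just collected through a shared, synchronized list.
    class BareThreadRecognizer {
        interface ShapeMatcher {
            double match(List<double[]> strokePoints); // returns a confidence score
        }

        List<Double> recognize(List<double[]> strokePoints, List<ShapeMatcher> matchers)
                throws InterruptedException {
            List<Double> scores = Collections.synchronizedList(new ArrayList<>());
            List<Thread> threads = new ArrayList<>();
            for (ShapeMatcher m : matchers) {
                Thread t = new Thread(() -> scores.add(m.match(strokePoints)));
                threads.add(t);
                t.start();
            }
            for (Thread t : threads) {
                t.join(); // the only "coordination" is waiting for everyone to finish
            }
            return scores;
        }
    }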
cheers,
Ugo
Re: FlexiTools'2010 [message #528228 is a reply to message #528031]
Mon, 19 April 2010 18:37
Miles Parker Messages: 1341 Registered: July 2009
Senior Member
Ugo Sangiorgi wrote on Sun, 18 April 2010 20:14:
> I agree it is quite a challenge -- more from the technological POV than from the cultural one, maybe.
> I've been studying a lot of Semiotics lately; the models from both sides might match and structures might be resignified, but the problem resides more in the technological support.
> Although we can speak metaphorically, we can't program in the same way... abstraction is just not enough.
We know the world is getting interesting when computer programmers start studying literary theory and people can show up at science meetings talking about things like "discourse communities" without being thrown out of the room.
That is an interesting point about metaphors, abstractions and generated "real" code -- I'm still trying to wrap my head around it. I saw a husband-and-wife team -- both with technical and arts backgrounds -- discuss a system where people simply put together a mental map of their surroundings: how they related to different household objects and simple concepts. It was just a set of CRC triples, not nearly rich enough I don't think, but it did make me think that perhaps the issue isn't abstractions per se. If we think about it in terms of patterns -- which your system already does on the most basic generalization -> individual instantiation mapping -- then the abstractions don't need to be as explicit.
I think a major concern of the person I was talking to about this was how representational systems -- i.e. the meta-model -- themselves bias how we think about a problem. And this is something that is being struggled with in a number of domains and methodologies. For example -- I think the canonical example, really -- think about how making formal analytical mathematics (ODE / PDE) the core metaphor for all science modeling has distorted the science itself! So the point is to leave this sort of thing open-ended and then have a way of gradually moving from the community understanding to a more formal / machine-friendly representation, without losing an understanding of all the compromises that have to be made to force ideas into a given representational box.
We don't need to privilege the idea that there is one and only one possible real abstract interpretive story for a particular set of pattern relations. But then at some point you do make the mapping to a more formal system. That is a very clear and explicit mapping as well, so that the assumptions you made to turn those repeated usage patterns into more rigorous abstractions are documented and transparent. Blah, blah, blah -- I'm not sure how any of that would actually work in practice, but it would be worth trying.
Quote:
> I have a little knowledge of multi-agent systems, but once the code finally gets published you will
> be able to see how a true multi-agent methodology might fit the recognizer -- it is currently implemented
> as bare threads, with no protocols or message exchanging.
Interestingly, what we are doing with Agent-Based Models is technically simpler than most general implementations of MAS. There aren't complex communication or message-passing protocols. Instead you're representing individual heuristic decision-making processes within the context of a community of agents. There could be communication in this, but it (arguably) doesn't have to be distributed and asynchronous to get very interesting results that sometimes map very well to "real world" observation. So the basic idea is lots of very light-weight agents as opposed to relatively few heavy-weight agents.
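To sketch the flavor of that (entirely hypothetical -- this is not AMF code, just the shape of the idea):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Hypothetical agent-based model sketch. Many light-weight agents, each
    // applying a simple heuristic rule once per tick; the update loop is
    // synchronous, with no message passing and no distribution.
    class LightweightABM {
        static class Agent {
            double state;

            // Heuristic rule: drift toward the population average, plus noise.
            void step(double populationAverage, Random rng) {
                state += 0.1 * (populationAverage - state) + 0.01 * rng.nextGaussian();
            }
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            List<Agent> agents = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) { // lots of very cheap agents
                Agent a = new Agent();
                a.state = rng.nextDouble();
                agents.add(a);
            }
            for (int tick = 0; tick < 100; tick++) { // synchronous ticks
                double avg = agents.stream().mapToDouble(a -> a.state).average().orElse(0);
                for (Agent a : agents) {
                    a.step(avg, rng);
                }
            }
            System.out.printf("mean state after 100 ticks: %.4f%n",
                    agents.stream().mapToDouble(a -> a.state).average().orElse(0));
        }
    }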
Anyway, it's neat to see that you have a much broader view of all of this, as I sort of intuited you would. I'll be looking forward to continuing the discussion as the technology moves forward.