The separating isosurface (contours) of two classes of object in 2D space


I have written a tiny paper at https://github.com/sjhalayka/papers/raw/main/separation/separation.pdf

Do any comments or questions arise?


I already fail on the first sentence, but that's just me.

I imagine the problem is to distribute a set of points from two classes over the whole area, so you know for any location how much it is affected by either class.
A trivial solution could be to make a Delaunay triangulation from the points, then use its dual (a Voronoi diagram) to form polygons around the points, and color them by class. This would not be smooth or weighted, of course.
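Roughly, I mean something like this, using OpenCV's Subdiv2D (the labelled_point type, the colours, and the centre-matching are just made up for this sketch):

```cpp
#include <opencv2/opencv.hpp>
#include <limits>
#include <vector>

// Hypothetical input type: a 2D point plus a class label (0 = red, 1 = blue).
struct labelled_point { cv::Point2f p; int class_id; };

cv::Mat voronoi_class_image(const std::vector<labelled_point>& pts, int w, int h)
{
    // Delaunay triangulation of the input points; its dual gives the Voronoi cells.
    cv::Subdiv2D subdiv(cv::Rect(0, 0, w, h));

    for (const auto& lp : pts)
        subdiv.insert(lp.p);

    std::vector<std::vector<cv::Point2f>> facets;
    std::vector<cv::Point2f> centres;
    subdiv.getVoronoiFacetList(std::vector<int>(), facets, centres);

    cv::Mat img(h, w, CV_8UC3, cv::Scalar(0, 0, 0));

    for (size_t i = 0; i < facets.size(); i++)
    {
        // Match this facet's centre back to the nearest input point,
        // then colour the whole cell by that point's class.
        size_t best = 0;
        float best_d2 = std::numeric_limits<float>::max();

        for (size_t j = 0; j < pts.size(); j++)
        {
            const float dx = centres[i].x - pts[j].p.x;
            const float dy = centres[i].y - pts[j].p.y;
            const float d2 = dx * dx + dy * dy;
            if (d2 < best_d2) { best_d2 = d2; best = j; }
        }

        std::vector<cv::Point> poly(facets[i].begin(), facets[i].end());

        const cv::Scalar colour = (pts[best].class_id == 0)
            ? cv::Scalar(0, 0, 255)   // red (BGR)
            : cv::Scalar(255, 0, 0);  // blue

        cv::fillConvexPoly(img, poly, colour);
    }

    return img;
}
```

Each Voronoi cell is the region closest to one input point, so colouring cells by class gives a hard, piecewise-linear class boundary rather than a smooth one.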

But what i really wonder is: For what do you need ray triangle intersection here?

Good day sir. Thanks for your reply.

One need not perform ray-triangle intersection if one instead uses the equation that was used to generate the marched squares in the first place. I pretty much arbitrarily chose to go the triangle route. It should be faster, especially when using a quadtree, when there are thousands of training points to consider versus just a few triangles.

Thanks so much for humouring me.

taby said:
Thanks so much for humouring me.

I wasn't. I just hoped that if you explained the purpose of the rays, I could understand what the problem is and which tools you use to solve it. E.g. those circular loops you get look interesting, but I have no idea how you get them.

But from what you just said, maybe you're not using a ray-triangle intersection, but a point-in-triangle test? The former often includes the latter, performing it after calculating a plane-ray intersection to get the point. But there is still a difference between the two.
So maybe that's indeed something you should fix in the paper, if it's just the test.

Yes, I am technically using a point-in-triangle test, since the z component is always 0.

The triangles and line segments define the area and outline of each class. Once it has been trained, I use the point in triangle test to see where a test point lies.
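The point-in-triangle test itself is just the usual signed-area / same-side version, roughly like this (a sketch with placeholder names, not the exact code from the paper):

```cpp
// Minimal 2D point type; the z component would always be 0, so it is omitted.
struct vec2 { float x, y; };

// Z component of the cross product of (b - a) and (c - a); its sign says
// which side of the line through a and b the point c lies on.
static float cross_z(const vec2& a, const vec2& b, const vec2& c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// True if p is inside (or on an edge of) triangle abc, for either winding order.
bool point_in_triangle(const vec2& p, const vec2& a, const vec2& b, const vec2& c)
{
    const float d0 = cross_z(a, b, p);
    const float d1 = cross_z(b, c, p);
    const float d2 = cross_z(c, a, p);

    const bool has_neg = (d0 < 0) || (d1 < 0) || (d2 < 0);
    const bool has_pos = (d0 > 0) || (d1 > 0) || (d2 > 0);

    // Inside means p is never on "both sides" of the triangle's edges.
    return !(has_neg && has_pos);
}
```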

taby said:
Yes, I am technically using a point-in-triangle test, since the z component is always 0.

Ray-triangle intersection in 2D still makes sense, e.g. if we did some projectile-enemy collision detection in a 2D game. So I would change that mention of rays in the paper, because it confuses rays (which are not used) with points.

taby said:
The triangles and line segments define the area and outline of each class. Once it has been trained, I use the point in triangle test to see where a test point lies.

I still don't get it. Currently I think it works like this:

Your initial data is the red/blue points?
Then you rasterize them to an image (and eventually blur it)?
Then some NN turns that image (which looks sparse) into a smooth image, where color is diffused over the whole area? (Why not just blur it?)
Then you generate a mesh, using the image as a density to drive marching squares.

But I'm surely missing some things, like the need for those curvy loops, which are generated for each input point, maybe?

I took a week off of the project to de-familiarize myself with the subject matter. This should help me see what needs to be clarified.

Thanks so much for your questions and comments.

JoeJ said:

taby said:
Yes, I am technically using a point-in-triangle test, since the z component is always 0.

Ray-triangle intersection in 2D still makes sense, e.g. if we did some projectile-enemy collision detection in a 2D game. So I would change that mention of rays in the paper, because it confuses rays (which are not used) with points.

Will do.

taby said:
The triangles and line segments define the area and outline of each class. Once it has been trained, I use the point in triangle test to see where a test point lies.

I still don't get it. Currently I think it works like this:

Your initial data is the red/blue points?
Then you rasterize them to an image (and eventually blur it)?
Then some NN turns that image (which looks sparse) into a smooth image, where color is diffused over the whole area? (Why not just blur it?)
Then you generate a mesh, using the image as a density to drive marching squares.

Yep, the input data are the points. I create a 2D scalar field that is coloured based on how close each field location is to all of those points. The colour falls off with distance, unlike long-range electromagnetism and gravitation, which fall off with distance squared.
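In rough C++ terms, the field generation for one class looks something like this (a simplified sketch; the fall-off function, constants, and names are arbitrary, not the exact code from the paper):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Build a scalar field for one class: each pixel accumulates a contribution
// from every training point of that class, falling off with distance (1/r).
cv::Mat make_class_field(const std::vector<cv::Point2f>& class_points, int w, int h)
{
    cv::Mat field(h, w, CV_32FC1, cv::Scalar(0));

    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            float sum = 0;

            for (const auto& p : class_points)
            {
                const float dx = static_cast<float>(x) - p.x;
                const float dy = static_cast<float>(y) - p.y;
                const float dist = std::sqrt(dx * dx + dy * dy);

                // Falls off with distance, not distance squared; the +1
                // just avoids a singularity right on top of a point.
                sum += 1.0f / (1.0f + dist);
            }

            field.at<float>(y, x) = sum;
        }
    }

    return field;
}
```

One such field could be built per class and then combined (e.g. by subtraction) before contouring; the paper's exact weighting and combination may differ.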

Then I blur the image using a Gaussian blur in OpenCV. As you noticed, the colour is spread/generated by the blur.
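For reference, the blur step is just a call to cv::GaussianBlur (the sigma value here is arbitrary):

```cpp
#include <opencv2/opencv.hpp>

// Blur the scalar field; with a kernel size of (0, 0), OpenCV derives the
// kernel size from sigma automatically.
cv::Mat blur_field(const cv::Mat& field, double sigma = 10.0)
{
    cv::Mat blurred;
    cv::GaussianBlur(field, blurred, cv::Size(0, 0), sigma);
    return blurred;
}
```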

Finally, I generate the mesh from the blurred image using Marching Squares.
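The contouring step is plain Marching Squares over the blurred field, roughly like this (a compact sketch with an arbitrary isovalue; the two saddle cases are resolved arbitrarily, and the output is a bag of line segments rather than a stitched mesh):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

struct segment { cv::Point2f a, b; };

// Linear interpolation of the isovalue crossing point along a cell edge.
static cv::Point2f lerp_edge(cv::Point2f p0, float v0, cv::Point2f p1, float v1, float iso)
{
    const float t = (iso - v0) / (v1 - v0);
    return cv::Point2f(p0.x + t * (p1.x - p0.x), p0.y + t * (p1.y - p0.y));
}

std::vector<segment> marching_squares(const cv::Mat& f, float iso)
{
    std::vector<segment> segs;

    for (int y = 0; y + 1 < f.rows; y++)
    {
        for (int x = 0; x + 1 < f.cols; x++)
        {
            // Corner values of the current cell.
            const float v0 = f.at<float>(y,     x);     // top-left
            const float v1 = f.at<float>(y,     x + 1); // top-right
            const float v2 = f.at<float>(y + 1, x + 1); // bottom-right
            const float v3 = f.at<float>(y + 1, x);     // bottom-left

            const cv::Point2f p0(x, y),     p1(x + 1, y);
            const cv::Point2f p2(x + 1, y + 1), p3(x, y + 1);

            // 4-bit case index: which corners lie above the isovalue.
            int c = 0;
            if (v0 > iso) c |= 1;
            if (v1 > iso) c |= 2;
            if (v2 > iso) c |= 4;
            if (v3 > iso) c |= 8;

            if (c == 0 || c == 15)
                continue; // cell is entirely inside or outside

            // Crossing points on the four edges (only the valid ones are used).
            const cv::Point2f top    = lerp_edge(p0, v0, p1, v1, iso);
            const cv::Point2f right  = lerp_edge(p1, v1, p2, v2, iso);
            const cv::Point2f bottom = lerp_edge(p3, v3, p2, v2, iso);
            const cv::Point2f left   = lerp_edge(p0, v0, p3, v3, iso);

            switch (c)
            {
            case 1: case 14: segs.push_back({left,  top});    break;
            case 2: case 13: segs.push_back({top,   right});  break;
            case 3: case 12: segs.push_back({left,  right});  break;
            case 4: case 11: segs.push_back({right, bottom}); break;
            case 6: case 9:  segs.push_back({top,   bottom}); break;
            case 7: case 8:  segs.push_back({left,  bottom}); break;
            case 5:  segs.push_back({left, top});   segs.push_back({right, bottom}); break;
            case 10: segs.push_back({top,  right}); segs.push_back({left,  bottom}); break;
            }
        }
    }

    return segs;
}
```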

But I'm surely missing some things, like the need for those curvy loops, which are generated for each input point, maybe?

The curvy loops aren't really necessary. I will take this into consideration.

Thanks again!

… almost demystified :D

Two questions remain:

You mentioned a training process in the paper, which is missing from your list of processing steps. I was assuming there was some NN stuff going on, so that's maybe another source of confusion.

taby said:
The curvy loops aren't really necessary.

But I still wonder how you get them, and they look interesting. Now I guess it could work like this:
For a given location, sum up the distances to all points, and also track the closest point.
Search for spots where the distance to the closest point is 10%, 20%, 30%, … of the sum/average of all distances, and draw them, so we get a loop around the closest point which is not just a perfect circle.
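In code, that guess would look something like the following (all names made up; contouring the resulting field at 0.1, 0.2, 0.3, … would then draw the loops):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// For each pixel: distance to the closest point divided by the average
// distance to all points. The result is in [0, 1] and gives curvy,
// non-circular loops around each point when contoured at fixed levels.
cv::Mat closest_over_average_field(const std::vector<cv::Point2f>& pts, int w, int h)
{
    cv::Mat field(h, w, CV_32FC1, cv::Scalar(0));

    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            float sum = 0;
            float closest = std::numeric_limits<float>::max();

            for (const auto& p : pts)
            {
                const float d = std::hypot(static_cast<float>(x) - p.x,
                                           static_cast<float>(y) - p.y);
                sum += d;
                closest = std::min(closest, d);
            }

            const float average = sum / static_cast<float>(pts.size());
            field.at<float>(y, x) = closest / average;
        }
    }

    return field; // contour at 0.1, 0.2, 0.3, ... to draw the loops
}
```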

taby said:
This should help me see what needs to be clarified.

I do not really intend to help out with that. Knowing nothing about statistics, I lack the background knowledge, and I'm not the kind of person who would read the paper.
So I cannot really seriously review the paper. I'm just discussing out of personal curiosity.
But you know that ; )

