
Geo-based for indoor use

My problem is this: I would like to build an app where the user uses the phone to view the corridor of a building with virtual objects distributed in it. It does not need navigation (the user would be standing still), and the building has no other floors. Besides letting the user see more than one virtual element in the indoor environment, I would like to know whether it is possible to change the size/scale of these objects according to their distance.

Is there such a possibility? Do you have a solution for that? Any tutorial? I use Wikitude with Unity.

In addition, I found this (in the attached image) on the site. I would like to know if there is a tutorial on how to develop something similar.



This navigation functionality was a proof of concept project that Wikitude developed in the past. Therefore, we do not offer any tutorials on how to achieve it.

Indoor navigation, to my limited understanding of the topic, is not trivial to do. I believe there are solutions available, but I cannot comment on how well they work or what kind of technology they use under the hood; that, I'm afraid, you will have to investigate yourself. For instance, an indoor navigation use case can be achieved by using beacons and triangulation (strictly speaking, trilateration from measured distances): you feed the SDK the positions of the beacons and use them to localize the user's position.

Our SDK products do not support drawing lines at the moment, so you will need to implement the guiding functionality on your end, e.g. with dots displayed along the route. Unfortunately, we don't have any specific sample code for this.
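One common way to fake a line with dots is to precompute evenly spaced positions along the route polyline and render a small augmentation at each one. A minimal, SDK-agnostic sketch (plain Python; the function name and parameters are my own):

```python
import math

def dots_along_route(waypoints, spacing):
    """Return points spaced `spacing` apart along a polyline of waypoints."""
    points = [waypoints[0]]
    carry = 0.0  # distance covered since the last emitted dot
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(x2 - x1, y2 - y1)
        d = spacing - carry  # distance into this segment for the next dot
        while d <= seg:
            t = d / seg
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
            d += spacing
        carry = (carry + seg) % spacing
    return points
```

Each returned point would then be placed in the scene (for example as a relative location with a dot image attached), which approximates a guiding line.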

I hope this helps.


Thanks for the reply, Eva.

But my main question does not involve navigation. Actually, I want to represent objects in a static environment: just point the phone and view distributed objects, as in a room, for example. Geo-based may not be the best option, but I also do not know whether an image target or 3D tracking would be more appropriate. The fact is that I have no idea how to control the size of the objects to give a better sense of distance, just as I do not know which marker to use to distribute these elements in the scene. To better illustrate my idea, attached below is a schematic drawing of what I would like to do. I emphasize that it is not navigation, and that I can have an object as a placeholder in the environment, but I am having difficulty implementing this and wonder whether it is possible. There does not even have to be interaction, and the virtual elements can be two-dimensional. I would just like to solve this problem that I have been working on for a while. Thanks in advance.



I am afraid that the details you have given me are a bit vague. You can display objects using either the geo-location features, an image as a target, or an object target. You can attach as many augmentations as you wish per target. For each augmentation that you are using, you can specify the dimensions by setting its height and its scale (see the documentation for ImageDrawable).
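If you end up setting the scale of each augmentation yourself and want a stronger sense of depth, one simple approach is to map the object's distance to a clamped inverse scale and feed the result into the augmentation's scale property. This is a generic sketch of that mapping (the function and its default values are my own illustration, not part of the Wikitude SDK):

```python
def scale_for_distance(distance_m, reference_distance_m=10.0,
                       min_scale=0.2, max_scale=2.0):
    """Scale factor that shrinks an augmentation inversely with distance.

    An object at `reference_distance_m` gets scale 1.0; closer objects
    grow and farther objects shrink, clamped so extreme distances stay
    readable on screen.
    """
    raw = reference_distance_m / max(distance_m, 1e-6)  # avoid div by zero
    return max(min_scale, min(max_scale, raw))
```

The clamping is the important design choice: without it, very near objects fill the screen and very far ones shrink to invisible pixels, which defeats the purpose of conveying distance.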

If you can tell me your exact use case then maybe I can further help you with more information.



