Well, you can draw a camera in a GUI window. Though you seem to be using the UI Toolkit, so you're currently mixing some terminology. Your code does not work because it does not make much sense: DrawCamera is a method, and you just called that method in your code and tried to pass its return value to the Add function of your panel. Since DrawCamera draws the camera immediately, this is just the wrong approach. A panel contains objects / elements, not methods.

When you use the UI Toolkit, you could possibly use an IMGUIContainer, which allows you to embed some classical IMGUI code inside a UI element. I have barely used the UI Toolkit and am more familiar with the "old" IMGUI.

Just as an example, in this WebGL example I actually use two cameras, and one is rendered "into" / on top of an IMGUI window. If you're interested in the code, the whole project is linked in the top left as a zip file. The relevant code for the camera views is in the "MainRenderer.cs" file.

I don't think I ever used Handles.DrawCamera. I usually draw the camera manually by calling cam.Render(). Though that means you have to set up the rectangle yourself, and this is usually the most fiddly part, since the camera requires screen coordinates, which start at the bottom-left corner of the window / rendering context, while GUI coordinates start at the top left. This conversion is quite trivial in a built game, as it's essentially just flipping the y coordinate. However, inside the editor / an editor window things get more complicated due to the "tab" header of a floating container window or the general offset of an editor window inside the main window of the Unity editor.

In my old UVViewer I don't render any cameras, but I make extensive use of the GL class to draw into my editor window as well as the scene view. It may be a bit outdated (the first version was written around 2013) but it may provide some ideas.

Of course the idea of UIElements / the UI Toolkit is that you set up your static UI element tree, which gets drawn automatically for you. This reduces the flexibility that the IMGUI system provided. Drawing a camera is probably the most dynamic element you could think of ^^.

You just call cam.Render() and the camera is rendered into the current rendering context, which is the editor window at the time OnGUI / OnInspectorGUI is called (of course, you should only draw it during the Repaint event). Of course you have to set up the area / screen rect of the camera before you draw it. You would usually use either the pixelRect or the rect of the camera.

An alternative could be to set up your camera with a RenderTexture and let it do its job independently. You would just use the RenderTexture as a texture inside your UI. Of course this has additional GPU overhead and makes re-scaling an issue. Though if you just want a rough overview, this should probably work as well.

In the end a camera is not really "a thing". All it does is encapsulate certain rendering parameters (position, rotation, projection), and when you "render" a camera it just issues draw calls for all renderers which may overlap with its frustum. Instead of a camera you could in theory do everything yourself.
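To make the cam.Render() approach concrete, here is a minimal sketch of an EditorWindow that draws a camera manually during the Repaint event, flipping the y coordinate to convert the GUI rect into the bottom-left-origin screen rect the camera expects. The class name, the `previewCamera` field, and the hard-coded rect are my own illustration, not anything from the original post, and HiDPI scaling (EditorGUIUtility.pixelsPerPoint) may require extra adjustment:

```csharp
using UnityEngine;
using UnityEditor;

// Hypothetical example window; you have to create and manage the camera yourself.
public class CameraPreviewWindow : EditorWindow
{
    public Camera previewCamera;   // assign or create this yourself

    void OnGUI()
    {
        if (previewCamera == null)
            return;

        // Only issue the actual draw calls during the Repaint event.
        if (Event.current.type != EventType.Repaint)
            return;

        // Where the camera view should appear, in GUI coordinates (top-left origin).
        Rect guiRect = new Rect(10, 10, 300, 200);

        // The camera's pixelRect uses a bottom-left origin, so flip y against
        // the window height. (position is the EditorWindow's rect.)
        previewCamera.pixelRect = new Rect(
            guiRect.x,
            position.height - guiRect.y - guiRect.height,
            guiRect.width,
            guiRect.height);

        // Renders into the current rendering context, i.e. this editor window.
        previewCamera.Render();
    }
}
```

Note that, as described above, a floating window's tab header can introduce an additional vertical offset that this simple flip does not account for.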
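For the IMGUIContainer route mentioned above, a short sketch of embedding classical IMGUI code inside a UI Toolkit element tree might look like this (class name and label text are made up for illustration):

```csharp
using UnityEngine;
using UnityEditor;
using UnityEngine.UIElements;

// Hypothetical example window mixing IMGUI into a UI Toolkit tree.
public class ImguiInUiToolkitWindow : EditorWindow
{
    void CreateGUI()
    {
        // The callback runs classical IMGUI code whenever the container repaints.
        var imgui = new IMGUIContainer(() =>
        {
            GUILayout.Label("Drawn with IMGUI inside a UI Toolkit tree");
        });
        rootVisualElement.Add(imgui);
    }
}
```

Anything you would normally write in OnGUI, including the manual camera drawing, can go inside that callback.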
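The RenderTexture alternative could be sketched roughly like this: the camera renders into a texture on its own, and the texture is shown through a regular UI Toolkit Image element. The resolution, class name, and camera field are assumptions for illustration; remember the extra GPU cost and the re-scaling issue mentioned above, since the texture has a fixed size:

```csharp
using UnityEngine;
using UnityEditor;
using UnityEngine.UIElements;

// Hypothetical example window using a RenderTexture as a UI texture.
public class RenderTexturePreviewWindow : EditorWindow
{
    public Camera previewCamera;   // assign or create this yourself
    RenderTexture rt;

    void CreateGUI()
    {
        // Fixed-size target; re-scaling the window does not re-scale this.
        rt = new RenderTexture(512, 512, 16);
        if (previewCamera != null)
            previewCamera.targetTexture = rt;   // camera now renders into rt independently

        // The texture is used like any other image inside the static UI tree.
        var image = new Image { image = rt };
        rootVisualElement.Add(image);
    }

    void OnDisable()
    {
        if (rt != null)
            rt.Release();
    }
}
```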