Therefore, you should import the image mapping module and load several Image Mappers with different arguments.
Then, you should create a "dummy event" for different telescopes and expand a dimension (axis=1).
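The dummy-event step above can be sketched as follows. This is a minimal illustration, not CTLearn's actual code: the pixel count and values are invented for the example, and only the shape manipulation matches the instruction in the text.

```python
import numpy as np

# Hypothetical "dummy event": a 1D vector of per-pixel charges.
# The pixel count below is an assumption for illustration only.
n_pixels = 1039
dummy_event = np.arange(n_pixels, dtype=np.float32)

# Expand a dimension (axis=1) so the vector has an explicit channel
# axis, i.e. shape (n_pixels, 1), before passing it to a mapper.
dummy_event = np.expand_dims(dummy_event, axis=1)
print(dummy_event.shape)  # (1039, 1)
```

The same expansion would be repeated for each telescope type being tested, with `n_pixels` set to that camera's pixel count.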
BTW @shikharras @Shreyansh Tripathi @parthpm @gremlin97, you can actually test your script with an HDF5 file of MAGIC events.
This file contains 10 dummy events for the MAGIC telescope.
There are mainly two ways of dealing with raw IACT images captured by cameras made of hexagonal lattices of photomultipliers: you can either transform the hexagonal camera pixels to square image pixels (#56), or you can modify your convolution and pooling methods.

Create the final images using the function "map_image()" and plot them, so that you can compare them (and your script) with test_image_mapper.ipynb. We are reading the pixel positions of the IACTs from the FITS files in "ctlearn/ctlearn/pixel_pos_files/", which originate from ctapipe-extra. These FITS files also contain rotation information.

Hello @Tjark Miener, I noticed that in the 'image shifting' section of 'image_mapping.py' we shift the alternate columns by 1 without checking whether they are in the required form, as shown here: https://github.com/ai4iacts/hexagdly/blob/master/notebooks/how_to_apply_adressing_scheme.ipynb Should our test script contain images that are not aligned in this particular way, or is the input to our CNN always in the correct form?

Spent more time in your application than in the PR! My email is [email protected] in case you want to have some feedback on your application.

Hi everybody, I am a graduate in physics from the Complutense University of Madrid.

The four plots are the outputs of plain hexagonal convolution for 4 different stride sizes. The following is the code for the plot:

The result after the first epoch. I hope that I've explained it clearly.

Thank you @h3li05369. For the time being, we aren't allowed to share CTA private data with you or any non-CTA member.
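To make the "image shifting" question concrete, here is a toy, self-contained sketch of shifting alternate columns by one row on a small square array. This is not CTLearn's actual implementation; the array size and values are invented, and it only illustrates the offset-addressing idea discussed above.

```python
import numpy as np

# Toy grid of "hexagonal" pixel values packed into a 4x4 array
# (invented data for illustration).
hex_values = np.arange(16, dtype=float).reshape(4, 4)

# Shift every other column down by one row, approximating the
# hexagonal offset addressing scheme on a square grid.
shifted = np.zeros((5, 4))
for col in range(4):
    if col % 2 == 0:
        shifted[:4, col] = hex_values[:, col]  # even columns unchanged
    else:
        shifted[1:, col] = hex_values[:, col]  # odd columns shifted by 1
print(shifted)
```

If the input images were already aligned this way, applying the shift a second time would misplace the odd columns, which is exactly the concern raised about checking the input form first.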
A workaround here would be that you fork the CTLearn project, make your changes, and then I could set up some runs for you on our GPUs.

I am currently studying for an MS in astrophysics and I have started working on hexagonal convolution. I want to contribute to this CTLearn issue under the GSoC project.