In recent years, as you have probably seen in the news, there have been dramatic breakthroughs in the fields of artificial intelligence (AI), machine learning and deep learning. It was these developments that inspired me to embark on this project. Google Photos has demonstrated what is possible in the area of object recognition and detection, but there is a lot more that can be done. Google has been very open with its technology, and we have been able to leverage some of the same frameworks and pretrained models it has created.

We think our photo management software excels in the area of smart features: attributes that a computer has learnt to identify, such as objects that were detected, styles that are similar, or colors that are visible. We currently provide the following smart tagging classifiers:

Face - Detects faces and groups them with faces of the same person in your other photos. No training is required; just add your own labels to the people you recognise.

Object - Based on Google's models, but runs entirely within Photonix so no images are shared externally. The bounding box of each detected object is also stored, which can be used to determine its significance by size; this in turn helps with ordering the results that are returned.

Style - Trained on an image dataset labelled according to artistic styles such as geometric, minimalist and noir. This is quite a curious tool to play with and might bring some serendipitous results.

Color - Starting with a simple palette, it finds the nearest color for each part of an image. The area each color covers within the image is used to calculate a significance weighting for it.

Location - Currently works off the GPS data that cameras and phones often produce. A built-in map of the world allows it to determine the country and nearest big city to tag the photo with.
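The Color classifier's two steps — snapping each pixel to its nearest palette entry, then weighting each color by the image area it covers — can be sketched in a few lines of Python. This is an illustration only, not Photonix's actual code; the palette here is a hypothetical five-entry one, and the real classifier uses a richer palette and real image data.

```python
import math

# Hypothetical simple palette (the real one is larger).
PALETTE = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def nearest_color(pixel):
    """Return the palette name closest to an (r, g, b) pixel by Euclidean distance."""
    return min(PALETTE, key=lambda name: math.dist(pixel, PALETTE[name]))

def color_significance(pixels):
    """Weight each palette color by the fraction of the image area it covers."""
    counts = {}
    for px in pixels:
        name = nearest_color(px)
        counts[name] = counts.get(name, 0) + 1
    total = len(pixels)
    return {name: n / total for name, n in counts.items()}

# Example: a toy "image" of four pixels, three reddish and one bluish.
weights = color_significance([(250, 10, 10), (240, 0, 0), (255, 5, 5), (10, 10, 250)])
# weights -> {"red": 0.75, "blue": 0.25}
```

The resulting weights are what would drive the result ordering described above: an image that is three-quarters red ranks higher for a "red" search than one with only a sliver of red in it.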
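The Location classifier's "nearest big city" lookup boils down to a great-circle distance search over a table of reference cities. A minimal sketch, assuming a tiny hypothetical city table (Photonix's built-in world map is far more complete) and the standard haversine formula:

```python
import math

# Hypothetical reference cities as (latitude, longitude) in degrees.
CITIES = {
    "London": (51.5074, -0.1278),
    "Paris": (48.8566, 2.3522),
    "New York": (40.7128, -74.0060),
}

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))  # 6371 km = mean Earth radius

def nearest_city(lat, lon):
    """Return the reference city closest to a photo's GPS coordinates."""
    return min(CITIES, key=lambda c: haversine_km((lat, lon), CITIES[c]))

# A photo geotagged in Cambridge, UK is nearest to London:
# nearest_city(52.2053, 0.1218) -> "London"
```

Country lookup works the same way in spirit, except against polygon boundaries rather than point distances.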