3D Model

 
* [[3D Simulation Environments]]
* [http://info.vercator.com/blog/what-are-the-most-common-3d-point-cloud-file-formats-and-how-to-solve-interoperability-issues Common 3D point cloud file formats & solving interoperability issues | Charles Thomson - Vercator]
== Geometric Deep Learning ==
 
 
<youtube>be6Iw0QrI8w</youtube>

<youtube>D3fnGG7cdjY</youtube>
== [http://github.com/timzhang642/3D-Machine-Learning 3D Machine Learning | GitHub] ==

* [http://github.com/timzhang642/3D-Machine-Learning#courses Courses]
* [http://github.com/timzhang642/3D-Machine-Learning#datasets Datasets]
** [http://github.com/timzhang642/3D-Machine-Learning#3d_models 3D Models]
** [http://github.com/timzhang642/3D-Machine-Learning#3d_scenes 3D Scenes]
* [http://github.com/timzhang642/3D-Machine-Learning#pose_estimation 3D Pose Estimation]
* [http://github.com/timzhang642/3D-Machine-Learning#single_classification Single Object Classification]
* [http://github.com/timzhang642/3D-Machine-Learning#multiple_detection Multiple Objects Detection]
* [http://github.com/timzhang642/3D-Machine-Learning#segmentation Scene/Object Semantic Segmentation]
* [http://github.com/timzhang642/3D-Machine-Learning#3d_synthesis 3D Geometry Synthesis/Reconstruction]
** [http://github.com/timzhang642/3D-Machine-Learning#3d_synthesis_model_based Parametric Morphable Model-based methods]
** [http://github.com/timzhang642/3D-Machine-Learning#3d_synthesis_template_based Part-based Template Learning methods]
** [http://github.com/timzhang642/3D-Machine-Learning#3d_synthesis_dl_based Deep Learning Methods]
* [http://github.com/timzhang642/3D-Machine-Learning#material_synthesis Texture/Material Analysis and Synthesis]
* [http://github.com/timzhang642/3D-Machine-Learning#style_transfer Style Learning and Transfer]
* [http://github.com/timzhang642/3D-Machine-Learning#scene_synthesis Scene Synthesis/Reconstruction]
* [http://github.com/timzhang642/3D-Machine-Learning#scene_understanding Scene Understanding]
  
  

Revision as of 13:19, 28 July 2019


== 3D Models from 2D Images ==

== 3DCNN ==

[[File:Schematic-diagram-of-the-Deep-3D-Convolutional-Neural-Network-and-FEATURE-Softmax.png]]

Schematic diagram of the Deep 3D Convolutional Neural Network and FEATURE-Softmax Classifier models. (a) Deep 3D Convolutional Neural Network. The feature extraction stage consists of 3D convolutional layers and pooling (max/mean sub-sampling) layers. The 3D filters in the convolutional layers search for recurrent spatial patterns that best capture the local biochemical features needed to separate the 20 amino acid microenvironments. The pooling layers down-sample their input to increase the translational invariance of the network. By following the convolutional and pooling layers with fully connected layers, the pooled responses of all filters across all positions in the protein box can be integrated; the integrated information is then fed to a Softmax classifier layer, which calculates class probabilities and makes the final predictions. Prediction error drives updates of the trainable parameters in the classifier, the fully connected layers, and the convolutional filters, so the network learns the features that give the best performance. (b) FEATURE-Softmax Classifier. The FEATURE-Softmax model begins with an input layer that takes in FEATURE vectors, followed by two fully connected layers, and ends with a Softmax classifier layer; in this case, the input layer is equivalent to the feature extraction stage. In contrast to the 3DCNN, the prediction error drives parameter learning only in the fully connected layers and the classifier; the input features stay fixed throughout training.
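The pipeline described in the caption (3D convolution over a voxelised protein box, pooling for translational invariance, fully connected integration, then a Softmax over the 20 amino acid classes) can be sketched in plain NumPy. This is an illustrative toy, not the paper's implementation: the box size, filter count, and random weights are made-up assumptions, and a real model would be trained rather than randomly initialised.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_valid(vol, filt):
    """Naive 'valid' 3D cross-correlation of a single-channel volume."""
    D, H, W = vol.shape
    d, h, w = filt.shape
    out = np.empty((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i+d, j:j+h, k:k+w] * filt)
    return out

def max_pool3d(vol, s=2):
    """Non-overlapping 3D max pooling: down-samples to add translational invariance."""
    D, H, W = vol.shape
    vol = vol[:D - D % s, :H - H % s, :W - W % s]
    return vol.reshape(D // s, s, H // s, s, W // s, s).max(axis=(1, 3, 5))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy 8x8x8 voxel "protein box" and 4 random 3x3x3 filters (assumed sizes).
box = rng.standard_normal((8, 8, 8))
filters = rng.standard_normal((4, 3, 3, 3))

# Feature extraction: convolve, rectify, pool each filter's response map.
feats = np.stack([max_pool3d(np.maximum(conv3d_valid(box, f), 0))
                  for f in filters])          # shape (4, 3, 3, 3)
flat = feats.ravel()                          # pooled responses, all positions

# Fully connected layer into a 20-way Softmax (one class per amino acid).
W_fc = rng.standard_normal((20, flat.size)) * 0.01
probs = softmax(W_fc @ flat)

print(probs.shape)             # (20,)
print(round(probs.sum(), 6))   # 1.0
```

During training, the classification error would be back-propagated through `W_fc` and the filters, which is the distinction the caption draws between the 3DCNN (filters are learned) and the FEATURE-Softmax model (input features are fixed).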

== 3D Printing ==