•Proportional Editing: With proportional editing turned on, manipulations affect geometry in a radius around the point, edge, or face that's being moved, scaled, or rotated. The radius can be set numerically or adjusted interactively (mouse wheel, PgUp/PgDn, arrow keys, etc.), and the sphere of influence typically has a falloff curve (see the sketch after this list). (Raxx)
•Manipulate based on center of selection: Option to scale or rotate based on the selection's center rather than the mesh's pivot point (polyGon_tError)
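To illustrate the falloff mentioned in the first item, here is a minimal C++ sketch (hypothetical names, not Anim8or code) of how a per-vertex weight could be computed from the distance to the picked element and multiplied into the transform delta:

#include <cmath>

// Hypothetical proportional-editing weight: vertices within 'radius' of the
// picked element get a weight in [0,1] that fades with distance. A smooth
// cosine falloff is shown; linear, sharp, etc. curves just swap the formula.
double proportionalWeight(double distance, double radius)
{
    if (radius <= 0.0 || distance >= radius)
        return 0.0;                              // outside the sphere of influence
    double t = distance / radius;                // 0 at the picked element, 1 at the edge
    return 0.5 * (1.0 + std::cos(t * 3.14159265358979));
}

// Usage: newPosition = position + proportionalWeight(d, r) * delta,
// where delta is the full move applied to the picked element.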
As for your #1 request, it can be done via ASL, and it would probably be implemented faster if you requested it in the scripts request thread on the ASL board. If you still want it in the feature requests, let me know.
Regarding "Morph targets in sequence editor": one slick way of pulling this off is by having a morph triggered by a bone motion. For example, a morph can be set up so that when a person's forearm is raised, the bicep slides up the upper arm proportionally. Did you want that as an added suggestion? As for combining the editors, I think it'd be better to combine just the figure and sequence editors. That way you can tweak vertex weights while animating, among other things, and still have a relatively clutter-free interface. Tossing the object editor into the mix seems like it'd require a major overhaul of just about everything, and would make the UI as complicated as Blender, Max, Maya, etc.
kreator, so did you want something like F1 to go to the previous frame, and F2 to advance to the next frame? I think it'd make better sense to remap them to the [ and ] keys, or the , and . ones.
If you use multiple cameras, the backgrounds are never rendered with the other cameras, so you are limited to using just camera1 in any animations.
Quote: If you use multiple cameras, the backgrounds are never rendered with the other cameras, so you are limited to using just camera1 in any animations.
If the camera is keyed as active, the background image does render in that camera. (http://s6.postimg.org/9pdpaahst/whtwink.gif)
Quote: Is there a toon-render coming?
I'm looking into some better form of toon shading - still not sure exactly what it will be, but something that works for both the ray tracer and for OpenGL.
Quote: And something that renders shadows for 2-D projects
I'm not quite sure what you're asking. Can you explain a bit more?
I'm adding a post-render pass to help with things like toon shading. It shouldn't be too difficult to do things like Bloom lighting effects. What other dynamic lighting features would you like?
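For reference, a bloom pass is typically just a bright-pass filter, a blur, and an additive recombine over the finished frame. A rough CPU-side C++ sketch of that idea (hypothetical names and structures, not the actual renderer code):

#include <algorithm>
#include <vector>

struct Pixel { float r, g, b; };

// Hypothetical post-render bloom: keep only bright pixels, blur them,
// then add the result back onto the original image.
void bloomPass(std::vector<Pixel>& img, int w, int h, float threshold)
{
    std::vector<Pixel> bright(img.size());
    for (size_t i = 0; i < img.size(); ++i) {
        float lum = 0.299f * img[i].r + 0.587f * img[i].g + 0.114f * img[i].b;
        bright[i] = (lum > threshold) ? img[i] : Pixel{0.0f, 0.0f, 0.0f};   // bright-pass
    }

    std::vector<Pixel> blurred(img.size());
    for (int y = 0; y < h; ++y) {               // naive 3x3 box blur; a real pass
        for (int x = 0; x < w; ++x) {           // would use a separable Gaussian
            float r = 0, g = 0, b = 0;
            int n = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    int xx = x + dx, yy = y + dy;
                    if (xx < 0 || yy < 0 || xx >= w || yy >= h) continue;
                    r += bright[yy * w + xx].r;
                    g += bright[yy * w + xx].g;
                    b += bright[yy * w + xx].b;
                    ++n;
                }
            }
            blurred[y * w + x] = Pixel{r / n, g / n, b / n};
        }
    }

    for (size_t i = 0; i < img.size(); ++i) {   // additive recombine, clamped to 1.0
        img[i].r = std::min(1.0f, img[i].r + blurred[i].r);
        img[i].g = std::min(1.0f, img[i].g + blurred[i].g);
        img[i].b = std::min(1.0f, img[i].b + blurred[i].b);
    }
}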
But when I do that, it doesn't show any light! We'll take this to another thread so we don't clog this one up.
+1 Rendering Speed.
No, I don't know of a physics engine.
live preview in figure mode (have a viewport playing a sequence to be able to spot deformation)
If you could pause the preview and paint weights directly on the messed-up part, you wouldn't have to remember which vertices are wrong.
d-Anim: Adding a grid to perspective views is ambiguous unless it's tied to a particular plane, such as the ground plane. Is that what you wanted?
HRESULT SetSamplerState(
[in] DWORD Sampler,
[in] D3DSAMPLERSTATETYPE Type, // D3DSAMP_ADDRESSU, D3DSAMP_ADDRESSV (U,V sometimes known as S,T)
[in] DWORD Value //D3DTADDRESS_WRAP, D3DTADDRESS_CLAMP, D3DTADDRESS_MIRROR
);
// For this example, device is a valid Device object.
//
using System;
using Microsoft.DirectX.Direct3D;
// Load a texture.
Texture tx = new Texture(device, 4, 4, 0, 0, Format.X8R8G8B8, Pool.Managed);
// Set the texture in stage 0.
device.SetTexture(0, tx);
// Set some sampler states.
device.SamplerState[0].AddressU = TextureAddress.Clamp;  // Clamp U
device.SamplerState[0].AddressV = TextureAddress.Mirror; // Mirror V
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);          // Normal (repeat)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);   // Clamp and smear horizontally
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT); // Mirror vertically
+1 to animating bone length
+2!
(This pretty much falls under the "Free Node System" suggestion: Upgrade the rigid bone system into flexible nodes and constraints like in other animation systems, where they are easily movable, scale-able, etc.)
Yes, but bone length would be easier to implement and would be more compatible with older files.
jointangle { "bone01" "Y"
track {
floatkey { 0 0 10 10 "S" }
}
}
jointangle { "bone01" "Y"
track {
floatkey { 0 0 10 10 "S" }
}
}
jointscale { "bone01" "Y"
track {
floatkey { 0 0 10 10 "S" }
}
}
jointtrans { "bone01" "Y"
track {
floatkey { 0 0 10 10 "S" }
}
}
Animating bone length should translate vertices along the bone, kind of like they're attached to a hydraulic cylinder. Scaling could be a separate tool.
Quote: So, if you scaled a bone, it would move the child bones and scale the vertices?
It would offset and scale the child bones, unless they are configured not to inherit scale (a common option in animation software).
Changing length: vertices near the root will not move much; those near the tip will scale more or less as the bone scales. Any change is in the direction of the bone.
Changing scale: vertices scale away from or towards the bone. It's not clear where the center of the scaling should be - the base of the bone, or the center?
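A minimal C++ sketch of the two behaviors described above, working in the bone's local space with the root at the origin and the bone pointing along +Z (an assumption for illustration, not Anim8or's internals):

struct Vec3 { double x, y, z; };

// Length change: movement is along the bone axis only, proportional to how far
// along the bone the vertex sits, so the root barely moves and the tip follows
// the bone end (the "hydraulic cylinder" behavior).
Vec3 applyLengthChange(Vec3 v, double oldLength, double newLength)
{
    double t = (oldLength > 0.0) ? v.z / oldLength : 0.0;  // 0 at the root, 1 at the tip
    v.z += t * (newLength - oldLength);
    return v;
}

// Scale change: vertices move toward or away from the bone axis (the line
// x = y = 0); scaling z as well stretches the whole limb.
Vec3 applyScale(Vec3 v, double sx, double sy, double sz)
{
    Vec3 r = { v.x * sx, v.y * sy, v.z * sz };
    return r;
}

// In a full skinning pass, each result would still be blended by the vertex's
// bone weights before transforming back to world space.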
Quote: Also the option to be connected/disconnected, which means that when connected, there's no way to move the bone away from its parent bone (the base is stuck to the parent's head). When disconnected, it can be moved away. This option becomes important not just for normal animation, but also later on when constraints are implemented.
In Blender, this option is there mainly for historical reasons. However, it still matters for auto-IK, which acts only on "connected" chains. If the Blender team were to revise the auto-IK implementation to allow a configurable chain length, then the "Connected" option would become redundant, because translation can be locked anyway.
Assuming the root of the bone is the origin and the bone moves in the positive z direction, the obvious center for x and y scaling would be the line where x and y equal 0, i.e. away from the center of the bone. The trick is what to do with vertices that are influenced by the bone that have a negative z value. Do they stretch behind the bone? That could throw the joint off. Do they not stretch? That could throw off the proportion of the limb. Do you take the most negative z value vertex influenced by the bone as the 0-stretch point and stretch out from there? That might cause joint problems as well. I don't have a good answer, I think you'd have to experiment to see which works best.
Quote: The trick is what to do with vertices that are influenced by the bone that have a negative z value.
Vertices simply follow the transformations of the bones that influence them, in accordance with the influence factors (weights). If you're giving a bone a negative scale, it's only expected that the vertices will go along.
Quote: Assuming the root of the bone is the origin and the bone moves in the positive z direction, the obvious center for x and y scaling would be the line where x and y equal 0, i.e. away from the center of the bone.
A bone should have only one transformation. It's hard enough to get it right with one coordinate system per bone, what with parenting and constraints. Having multiple transformations per bone is just wrong.
Quote: What would be the use case for scaling?
A rig with squash-and-stretch capabilities.
Quote: I wish I could "walk forward turn and back"
If you mean you want your character or camera to "walk forward, turn and come back", then that's already totally possible.
I'd really like to be able to lock the knife tools (like I can with the add edge tool), so I can only cut in one direction. :-)
Sluggs: grid snap will work with control handles as well in the next drop.
... it's the lines on the resultant subdivision mesh that would be better in gray, with the option to disable them.
Quote: This concept, or a very similar one, is called Instancing in OpenGL and D3D. At first glance it may seem simple, but there are a lot of nuances. If you allow editing of the per-face properties in an instance (materials, texture coordinates, etc.), updating them when the geometry of the base shape changes is very complex. Vertex, edge, face, normal, and tex-coord numbers often change throughout the mesh when just one small part is edited. This is the same fundamental complexity as with mirrored meshes.
As I described in the post above, clones shouldn't be editable meshes unless they're converted into their own independent shapes (and thus no longer clones/instances), so there wouldn't be any editing of materials (besides the base material), texture coords, or anything else p/e/f on these active instances. Thus this complexity is avoided. So yes, what you described as fairly easy to support is what I described, and therefore useful enough ;)
What would be fairly easy to support: position, orientation, scale, materials (substituting different ones for the ones in the base model), and all global properties shown in the Properties editors such as subdivision level, parametric parameters, layer, smooth angle, etc.
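As a rough illustration of why that subset stays simple, a clone can be little more than a reference to the base shape plus the whole-object overrides listed above; nothing per-point or per-face is stored, so nothing needs remapping when the base mesh is re-edited. (Hypothetical C++ structures, not the actual implementation.)

#include <string>
#include <vector>

// Shared, editable geometry. Editing it automatically affects every clone,
// because clones never copy points, edges, or faces.
struct BaseShape {
    std::vector<float> points;          // packed x, y, z
    std::vector<int>   faces;           // indices into 'points'
    std::string        baseMaterial;
};

// A clone stores only whole-object properties, so re-editing the base mesh
// (which renumbers points/edges/faces) never invalidates anything here.
struct Clone {
    const BaseShape* base;              // shared geometry, never edited via the clone
    float            transform[16];     // position / orientation / scale
    std::string      materialOverride;  // substitutes the base material, or empty
    int              subdivisionLevel;
    int              layer;
    float            smoothAngle;
};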
if ($he_sharedPoints[$he_Point[$curHEString2[$j]]].x != -1.0) {
    if ($he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].x)]] != -1)
        $feature_neighbors[$he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].x)]]] = 1;
    if ($he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].x)]] != -1)
        $feature_neighbors[$he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].x)]]] = 1;
    if ($he_sharedPoints[$he_Point[$curHEString2[$j]]].y != -1.0) {
        if ($he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].y)]] != -1)
            $feature_neighbors[$he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].y)]]] = 1;
        if ($he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].y)]] != -1)
            $feature_neighbors[$he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].y)]]] = 1;
        if ($he_sharedPoints[$he_Point[$curHEString2[$j]]].z != -1.0) {
            if ($he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].z)]] != -1)
                $feature_neighbors[$he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].z)]]] = 1;
            if ($he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].z)]] != -1)
                $feature_neighbors[$he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].z)]]] = 1;
            if ($he_sharedPoints[$he_Point[$curHEString2[$j]]].w != -1.0) {
                if ($he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].w)]] != -1)
                    $feature_neighbors[$he_Pair1[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].w)]]] = 1;
                if ($he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].w)]] != -1)
                    $feature_neighbors[$he_Pair2[$he_EdgeIndex[$fToInt($he_sharedPoints[$he_Point[$curHEString2[$j]]].w)]]] = 1;
            }
        }
    }
}
!datatype half-edge (int id, int next, int prev, point3 point, int edgeindex, int opposite);
The ! indicates it's compiled first, before the rest of the script, and it needs to be declared outside of the $main function. Then, to use the half-edge data type, declare it like any other variable:
half-edge $HE[1];
To assign the values, two methods can be used:
$HE[0] = (0, 1, -1, (0,0,0), 85, 23); // Can also have variables passed as values
$HE[0].id = 0;
$HE[0].next = 1;
$HE[0].prev= -1;
$HE[0].point = (0,0,0);
$HE[0].edgeindex = 85;
$HE[0].opposite = 23;
It'd be useful to also be able to use custom data types within custom data types. For example, a custom data type vertex:
!datatype vertex (int id, point3 loc);
vertex $v;
Can be used in half-edge:
!datatype half-edge (int id, int next, int prev, vertex point, int edgeindex, int opposite);
Writing to it can look like this:
$HE[0].point.id = 1;
$HE[0].point.loc =(0,0,0);
'Screen-space rotate bones' can already be done with the trackball; applying that also to the mouse buttons would badly reduce the mobility of the bones. -1
Hit 'o' for object space/coordinates.
Obviously, most of these requests are things found in regular programming languages. If an API/DLL extension system were being worked on instead, I really couldn't care less about these requests and would just wait for that to save myself all the headache. :P
Steve, is there a chance Anim8or might get an SDK?
// A prompt box
String $alert1;
$alert1 = MsgBox("Hello", "What is your name?", 2, "Bob");
// A confirmation box
if ($alert1 != NULL)
    MsgBox("Please Confirm", PrintToString("That %s is reeeeeealllly your name", $alert1), 1);
else
    // An alert box
    MsgBox("...", "How Rude!", 0);
Should this not now be in the V1 forum?
v0.98 has come to an end.
Trev
Why is anyone still using An8 0.98?
There are some CAD functions in 1.0.1.x, but I will second that they need some love too.
Trev