Skip to content

Instantly share code, notes, and snippets.

@kazu0617
Created November 22, 2025 13:59
Show Gist options
  • Select an option

  • Save kazu0617/b5e1d678e441492cabeaa8ca82713801 to your computer and use it in GitHub Desktop.

Select an option

Save kazu0617/b5e1d678e441492cabeaa8ca82713801 to your computer and use it in GitHub Desktop.
もちふぃったーの変換スクリプト。GPLでライセンスされた箇所について公開
# -*- coding: utf-8 -*-
"""
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
"""
import bpy
import os
import json
import math
import numpy as np
import argparse
import sys
import re
from mathutils import Matrix, Vector, Euler
from mathutils.kdtree import KDTree
from typing import Dict, Optional, Tuple, Set
from scipy.spatial import cKDTree
import bmesh
from mathutils.bvhtree import BVHTree
import mathutils
import time
from collections import deque, defaultdict
# グローバルキャッシュ辞書を追加
_mesh_cache = {}
# グローバル変数:ポーズ状態管理
_saved_pose_state = None
_previous_pose_state = None
_is_A_pose = False
def save_pose_state(armature_obj: bpy.types.Object) -> dict:
"""
アーマチュアの現在のポーズ状態を保存する
Parameters:
armature_obj: アーマチュアオブジェクト
Returns:
保存されたポーズ状態のディクショナリ
"""
if not armature_obj or armature_obj.type != 'ARMATURE':
return None
pose_state = {}
for bone in armature_obj.pose.bones:
pose_state[bone.name] = {
'matrix': bone.matrix.copy(),
'location': bone.location.copy(),
'rotation_euler': bone.rotation_euler.copy(),
'rotation_quaternion': bone.rotation_quaternion.copy(),
'scale': bone.scale.copy()
}
return pose_state
def restore_pose_state(armature_obj: bpy.types.Object, pose_state: dict) -> None:
"""
アーマチュアのポーズ状態を復元する
Parameters:
armature_obj: アーマチュアオブジェクト
pose_state: 復元するポーズ状態のディクショナリ
"""
if not armature_obj or armature_obj.type != 'ARMATURE' or not pose_state:
return
for bone_name, state in pose_state.items():
if bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
bone.matrix = state['matrix']
bone.location = state['location']
bone.rotation_euler = state['rotation_euler']
bone.rotation_quaternion = state['rotation_quaternion']
bone.scale = state['scale']
# ポーズの更新を強制
bpy.context.view_layer.update()
def save_shape_key_state(mesh_obj: bpy.types.Object) -> dict:
"""
メッシュオブジェクトのシェイプキー状態を保存する
Parameters:
mesh_obj: メッシュオブジェクト
Returns:
保存されたシェイプキー状態のディクショナリ
"""
if not mesh_obj or not mesh_obj.data.shape_keys:
return {}
shape_key_state = {}
for key_block in mesh_obj.data.shape_keys.key_blocks:
shape_key_state[key_block.name] = key_block.value
return shape_key_state
def restore_shape_key_state(mesh_obj: bpy.types.Object, shape_key_state: dict) -> None:
"""
メッシュオブジェクトのシェイプキー状態を復元する
Parameters:
mesh_obj: メッシュオブジェクト
shape_key_state: 復元するシェイプキー状態のディクショナリ
"""
if not mesh_obj or not mesh_obj.data.shape_keys or not shape_key_state:
return
for key_name, value in shape_key_state.items():
if key_name in mesh_obj.data.shape_keys.key_blocks:
mesh_obj.data.shape_keys.key_blocks[key_name].value = value
def apply_blend_shape_settings(mesh_obj: bpy.types.Object, blend_shape_settings: list, ignore_missing_shape_keys: bool = True) -> None:
"""
シェイプキー設定を適用する
Parameters:
mesh_obj: メッシュオブジェクト
blend_shape_settings: 適用するシェイプキー設定のリスト
"""
if not mesh_obj or not mesh_obj.data.shape_keys or not blend_shape_settings:
return False
for setting in blend_shape_settings:
shape_name = setting.get("name")
if shape_name not in mesh_obj.data.shape_keys.key_blocks:
temp_shape_key_name = f"{shape_name}_temp"
if temp_shape_key_name not in mesh_obj.data.shape_keys.key_blocks:
if not ignore_missing_shape_keys:
print(f"Required shape key does not exist: {shape_name}")
return False
for setting in blend_shape_settings:
shape_name = setting.get("name")
shape_value = setting.get("value", 0.0)
if shape_name in mesh_obj.data.shape_keys.key_blocks:
mesh_obj.data.shape_keys.key_blocks[shape_name].value = shape_value
print(f"Applied shape key setting: {shape_name} = {shape_value}")
else:
temp_shape_key_name = f"{shape_name}_temp"
if temp_shape_key_name in mesh_obj.data.shape_keys.key_blocks:
mesh_obj.data.shape_keys.key_blocks[temp_shape_key_name].value = shape_value
print(f"Applied shape key setting: {temp_shape_key_name} = {shape_value}")
return True
def store_pose_globally(armature_obj: bpy.types.Object) -> None:
"""
グローバル変数にポーズ状態を保存する
Parameters:
armature_obj: アーマチュアオブジェクト
"""
global _saved_pose_state
_saved_pose_state = save_pose_state(armature_obj)
def restore_global_pose(armature_obj: bpy.types.Object) -> None:
"""
グローバル変数からポーズ状態を復元する
Parameters:
armature_obj: アーマチュアオブジェクト
"""
global _saved_pose_state
if _saved_pose_state is not None:
restore_pose_state(armature_obj, _saved_pose_state)
def store_current_pose_as_previous(armature_obj: bpy.types.Object) -> None:
"""
現在のポーズをpreviousポーズとして保存する
Parameters:
armature_obj: アーマチュアオブジェクト
"""
global _previous_pose_state
_previous_pose_state = save_pose_state(armature_obj)
def restore_previous_pose(armature_obj: bpy.types.Object) -> None:
"""
previousポーズを復元する
Parameters:
armature_obj: アーマチュアオブジェクト
"""
global _previous_pose_state
if _previous_pose_state is not None:
restore_pose_state(armature_obj, _previous_pose_state)
def load_avatar_data_for_blendshape_analysis(avatar_data_path: str) -> dict:
"""
BlendShape分析用にアバターデータを読み込む
Parameters:
avatar_data_path: アバターデータファイルのパス
Returns:
dict: アバターデータ
"""
try:
with open(avatar_data_path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
print(f"Error loading avatar data {avatar_data_path}: {e}")
return {}
def get_blendshape_groups(avatar_data: dict) -> dict:
"""
アバターデータからBlendShapeGroupsを取得する
Parameters:
avatar_data: アバターデータ
Returns:
dict: BlendShapeGroup名をキーとし、そのグループに含まれるBlendShape名のリストを値とする辞書
"""
groups = {}
blend_shape_groups = avatar_data.get('blendShapeGroups', [])
for group in blend_shape_groups:
group_name = group.get('name', '')
blend_shape_fields = group.get('blendShapeFields', [])
groups[group_name] = blend_shape_fields
return groups
def get_deformation_fields_mapping(avatar_data: dict) -> tuple:
"""
アバターデータからBlendShapeの変形フィールドマッピングを取得する
Parameters:
avatar_data: アバターデータ
Returns:
tuple: (blendShapeFields, invertedBlendShapeFields) のマッピング辞書のタプル
"""
blend_shape_fields = {}
inverted_fields = {}
# blendShapeFieldsから取得
for field in avatar_data.get('blendShapeFields', []):
label = field.get('label', '')
if label:
blend_shape_fields[label] = field
# invertedBlendShapeFieldsから取得
for field in avatar_data.get('invertedBlendShapeFields', []):
label = field.get('label', '')
if label:
inverted_fields[label] = field
return blend_shape_fields, inverted_fields
def load_deformation_field_num_steps(field_file_path: str, config_dir: str) -> int:
"""
変形フィールドファイルからnum_stepsを読み込む
Parameters:
field_file_path: 変形フィールドファイルのパス(相対パス可)
config_dir: 設定ファイルのディレクトリ
Returns:
int: num_stepsの値、読み込めない場合は1
"""
try:
# 相対パスの場合は絶対パスに変換
if not os.path.isabs(field_file_path):
field_file_path = os.path.join(config_dir, field_file_path)
if os.path.exists(field_file_path):
field_data = np.load(field_file_path, allow_pickle=True)
return int(field_data.get('num_steps', 1))
else:
print(f"Warning: Deformation field file not found: {field_file_path}")
return 1
except Exception as e:
print(f"Warning: Failed to load num_steps from {field_file_path}: {e}")
return 1
def process_single_blendshape_transition_set(current_settings: list, next_settings: list,
label: str, source_label: str, blend_shape_groups: dict,
blend_shape_fields: dict, inverted_blend_shape_fields: dict,
current_config_dir: str, mask_bones: list = None) -> dict:
"""
単一のBlendShape設定セット間の遷移を処理する
Parameters:
current_settings: 現在の設定リスト
next_settings: 次の設定リスト
label: ラベル名('Basis'または具体的なblendShapeFieldsラベル)
blend_shape_groups: BlendShapeGroupsの辞書
blend_shape_fields: BlendShapeFieldsの辞書
inverted_blend_shape_fields: invertedBlendShapeFieldsの辞書
current_config_dir: 現在の設定ファイルのディレクトリ
Returns:
list: 遷移データのリスト
"""
# 設定を辞書形式に変換
current_dict = {item['name']: item['value'] for item in current_settings}
next_dict = {item['name']: item['value'] for item in next_settings}
# すべてのBlendShape名を収集
all_blend_shapes = set(current_dict.keys()) | set(next_dict.keys())
transitions = []
processed_blend_shapes = set()
for blend_shape_name in all_blend_shapes:
if blend_shape_name in processed_blend_shapes:
continue
current_value = current_dict.get(blend_shape_name, 0.0)
next_value = next_dict.get(blend_shape_name, 0.0)
# 値に変化がある場合のみ処理
if current_value != next_value:
transition = {
'label': label,
'blend_shape_name': blend_shape_name,
'from_value': current_value,
'to_value': next_value,
'operations': [],
}
# BlendShapeGroupsでの特別処理
group_processed = False
for group_name, group_blend_shapes in blend_shape_groups.items():
if blend_shape_name in group_blend_shapes:
# グループ内の現在の非ゼロ値を探す
current_non_zero = None
for group_blend_shape in group_blend_shapes:
if current_dict.get(group_blend_shape, 0.0) != 0.0:
current_non_zero = group_blend_shape
break
# グループ内の次の非ゼロ値を探す
next_non_zero = None
for group_blend_shape in group_blend_shapes:
if next_dict.get(group_blend_shape, 0.0) != 0.0:
next_non_zero = group_blend_shape
break
# グループ内で異なるBlendShapeが正の値をとる場合
if current_non_zero and next_non_zero and current_non_zero != next_non_zero:
# 最初に前の値を0にする操作
field_file_path = inverted_blend_shape_fields[current_non_zero]['filePath']
num_steps = load_deformation_field_num_steps(field_file_path, current_config_dir)
current_value = current_dict.get(current_non_zero, 0.0)
from_step = int((1.0 - current_value) * num_steps + 0.5)
to_step = num_steps
transition['operations'].append({
'type': 'set_to_zero',
'blend_shape': current_non_zero,
'from_value': current_value,
'to_value': 0.0,
'file_path': os.path.join(current_config_dir, field_file_path),
'mask_bones': inverted_blend_shape_fields[current_non_zero]['maskBones'],
'num_steps': num_steps,
'from_step': from_step,
'to_step': to_step,
'field_type': 'inverted'
})
# 次に新しい値を設定する操作
field_file_path = blend_shape_fields[next_non_zero]['filePath']
num_steps = load_deformation_field_num_steps(field_file_path, current_config_dir)
next_value = next_dict.get(next_non_zero, 0.0)
from_step = 0
to_step = int(next_value * num_steps + 0.5)
transition['operations'].append({
'type': 'set_value',
'blend_shape': next_non_zero,
'from_value': 0.0,
'to_value': next_value,
'file_path': os.path.join(current_config_dir, field_file_path),
'mask_bones': blend_shape_fields[next_non_zero]['maskBones'],
'num_steps': num_steps,
'from_step': from_step,
'to_step': to_step,
'field_type': 'normal'
})
group_processed = True
processed_blend_shapes.add(current_non_zero)
processed_blend_shapes.add(next_non_zero)
break
# グループ処理がされなかった場合は単純な値の変更として記録
if not group_processed:
if current_value > next_value:
# 値の減少
field_file_path = inverted_blend_shape_fields[blend_shape_name]['filePath']
num_steps = load_deformation_field_num_steps(field_file_path, current_config_dir)
from_step = int((1.0 - current_value) * num_steps + 0.5)
to_step = int((1.0 - next_value) * num_steps + 0.5)
transition['operations'].append({
'type': 'decrease',
'blend_shape': blend_shape_name,
'from_value': current_value,
'to_value': next_value,
'file_path': os.path.join(current_config_dir, field_file_path),
'mask_bones': inverted_blend_shape_fields[blend_shape_name]['maskBones'],
'num_steps': num_steps,
'from_step': from_step,
'to_step': to_step,
'field_type': 'inverted'
})
else:
# 値の増加
field_file_path = blend_shape_fields[blend_shape_name]['filePath']
num_steps = load_deformation_field_num_steps(field_file_path, current_config_dir)
from_step = int(current_value * num_steps + 0.5)
to_step = int(next_value * num_steps + 0.5)
transition['operations'].append({
'type': 'increase',
'blend_shape': blend_shape_name,
'from_value': current_value,
'to_value': next_value,
'file_path': os.path.join(current_config_dir, field_file_path),
'mask_bones': blend_shape_fields[blend_shape_name]['maskBones'],
'num_steps': num_steps,
'from_step': from_step,
'to_step': to_step,
'field_type': 'normal'
})
processed_blend_shapes.add(blend_shape_name)
transitions.append(transition)
print(f" Transition detected [{label}]: {blend_shape_name} {current_value} -> {next_value}")
transition_set = {
'label': label,
'source_label': source_label, # 選ばれたtargetBlendShapeSettingsのlabelを記録
'mask_bones': mask_bones,
'current_settings': current_settings,
'next_settings': next_settings,
'transitions': transitions
}
return transition_set
def apply_blendshape_operation_with_shape_key_name(target_obj, operation, target_shape_key_name, rigid_transformation=False):
target_shape_key = target_obj.data.shape_keys.key_blocks.get(target_shape_key_name)
if target_shape_key is None:
print(f"Shape key {target_shape_key_name} not found")
return
original_shape_key_state = save_shape_key_state(target_obj)
#すべてのシェイプキーの値を0にする
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
target_shape_key.value = 1.0
apply_blendshape_operation(target_obj, operation, target_shape_key, rigid_transformation)
restore_shape_key_state(target_obj, original_shape_key_state)
def apply_blendshape_operation(target_obj, operation, target_shape_key, rigid_transformation=False):
"""
単一のBlendShape遷移を指定されたオブジェクトに適用する
Parameters:
target_obj: 対象メッシュオブジェクト
transition: 遷移データ
target_shape_key: 適用先のシェイプキー名 (Noneの場合はBasisに適用)
"""
try:
armature_obj = get_armature_from_modifier(target_obj)
field_file_path = operation['file_path']
num_steps = operation['num_steps']
from_step = operation['from_step']
to_step = operation['to_step']
field_type = operation['field_type']
print(f"Applying operation: {operation['blend_shape']} "
f"({operation['from_value']} -> {operation['to_value']}) "
f"steps {from_step}->{to_step}/{num_steps}")
if not os.path.exists(field_file_path):
print(f"Warning: Deformation field file not found: {field_file_path}")
return
# ステップ間の変換を計算
if from_step == to_step:
print("No step change required")
return
# 現在のオブジェクトの頂点位置を取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
vertices = np.array([v.co for v in eval_mesh.vertices])
num_vertices = len(vertices)
# ③ メインの Deformation Field 情報を取得
field_info = get_deformation_field_multi_step(field_file_path)
all_field_points = field_info['all_field_points']
all_delta_positions = field_info['all_delta_positions']
deform_weights = field_info['field_weights']
field_matrix = field_info['world_matrix']
field_matrix_inv = field_info['world_matrix_inv']
k_neighbors = field_info['kdtree_query_k']
# もしdeform_weightsがNoneの場合は、全ての頂点のウェイトを1.0とする
if deform_weights is None:
deform_weights = np.ones(num_vertices)
from_value = operation['from_value']
to_value = operation['to_value']
if field_type == 'inverted':
from_value = 1.0 - from_value
to_value = 1.0 - to_value
if from_value < 0.00001:
from_value = 0.0
if to_value < 0.00001:
to_value = 0.0
if from_value > 0.99999:
from_value = 1.0
if to_value > 0.99999:
to_value = 1.0
# カスタムレンジ処理を使用
world_positions = batch_process_vertices_with_custom_range(
vertices,
all_field_points,
all_delta_positions,
deform_weights,
field_matrix,
field_matrix_inv,
target_obj.matrix_world,
target_obj.matrix_world.inverted(),
from_value,
to_value,
deform_weights=deform_weights,
batch_size=1000,
k=k_neighbors
)
if rigid_transformation:
# numpy配列に変換
source_points = np.array([target_obj.matrix_world @ Vector(v) for v in vertices])
s, R, t = calculate_optimal_similarity_transform(source_points, world_positions)
# 相似変換を適用した結果を計算
world_positions = apply_similarity_transform_to_points(source_points, s, R, t)
# 結果を適用
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(target_obj.data.vertices)):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(world_positions[i])
local_pos = target_obj.matrix_world.inverted() @ undeformed_world_pos
target_shape_key.data[i].co = local_pos
matrix_armature_inv_fallback = matrix_armature_inv
return target_shape_key
except Exception as e:
print(f"Error applying operation {operation['blend_shape']}: {e}")
import traceback
traceback.print_exc()
def get_source_label(transition_label: str, config_data: Optional[dict]) -> Optional[str]:
if config_data is None:
return None
transition_sets = config_data.get('blend_shape_transition_sets', [])
if not transition_sets:
return None
for transition_set in transition_sets:
if transition_set.get('label', '') == transition_label:
return transition_set.get('source_label', '')
return None
def calculate_blendshape_settings_difference(settings1: list, settings2: list,
blend_shape_fields: dict,
config_dir: str) -> float:
"""
BlendShapeSettings間の状態差異を計算する
Parameters:
settings1: 最初のBlendShapeSettings
settings2: 次のBlendShapeSettings
blend_shape_fields: BlendShapeFieldsの辞書
config_dir: 設定ファイルのディレクトリ
Returns:
float: 差異の量
"""
# 設定を辞書形式に変換
dict1 = {item['name']: item['value'] for item in settings1}
dict2 = {item['name']: item['value'] for item in settings2}
# すべてのBlendShape名を収集
all_blend_shapes = set(dict1.keys()) | set(dict2.keys())
total_difference = 0.0
for blend_shape_name in all_blend_shapes:
value1 = dict1.get(blend_shape_name, 0.0)
value2 = dict2.get(blend_shape_name, 0.0)
# 値の差の絶対値
value_diff = abs(value1 - value2)
if value_diff > 0.0 and blend_shape_name in blend_shape_fields:
# 変形データのファイルパスを取得
field_file_path = blend_shape_fields[blend_shape_name]['filePath']
full_field_path = os.path.join(config_dir, field_file_path)
try:
# 変形データを読み込み
data = np.load(full_field_path, allow_pickle=True)
delta_positions = data['all_delta_positions']
total_max_displacement = 0.0
for i in range(len(delta_positions)):
max_displacement = np.max(np.linalg.norm(delta_positions[i], axis=1))
total_max_displacement += max_displacement
# if len(delta_positions) > 0:
# first_step_deltas = delta_positions[0]
# max_displacement = np.max(np.linalg.norm(first_step_deltas, axis=1))
# 差の絶対値に最大変位を掛けて加算
total_difference += value_diff * total_max_displacement
except Exception as e:
print(f"Warning: Could not load deformation data for {blend_shape_name}: {e}")
# データを読み込めない場合は値の差をそのまま使用
total_difference += value_diff
return total_difference
def find_best_matching_target_settings(source_label: str,
all_target_settings: dict,
all_target_mask_bones: dict,
source_settings: list,
blend_shape_fields: dict,
config_dir: str,
mask_bones: list = None) -> tuple:
"""
sourceBlendShapeSettingsに最も近いtargetBlendShapeSettingsを見つける
Parameters:
all_target_settings: ラベルごとのtargetBlendShapeSettingsの辞書
all_target_mask_bones: ラベルごとのmaskBonesの辞書
source_settings: sourceBlendShapeSettings
blend_shape_fields: BlendShapeFieldsの辞書
config_dir: 設定ファイルのディレクトリ
mask_bones: 比較対象のmaskBones
Returns:
tuple: (best_label, best_target_settings)
"""
best_label = None
best_target_settings = None
min_difference = float('inf')
for label, target_settings in all_target_settings.items():
# mask_bonesとall_target_mask_bones[label]の間に共通要素があるかチェック
if mask_bones is not None and label in all_target_mask_bones:
target_mask_bones = all_target_mask_bones[label]
if target_mask_bones is not None:
# setに変換して共通要素をチェック
mask_bones_set = set(mask_bones)
target_mask_bones_set = set(target_mask_bones)
# 共通要素がない場合はスキップ
if not mask_bones_set.intersection(target_mask_bones_set):
print(f"label: {label} - skip: no common mask_bones")
continue
difference = calculate_blendshape_settings_difference(
target_settings, source_settings, blend_shape_fields, config_dir
)
# labelとsource_labelから___idを取り除いて比較
label_without_id = label.split('___')[0] if '___' in label else label
source_label_without_id = source_label.split('___')[0] if '___' in source_label else source_label
# labelがsource_labelの場合は、差異を1.5で割り優先度を上げる
if label_without_id == source_label_without_id:
difference = difference / 1.5
print(f"label: {label} difference: {difference}")
if difference < min_difference:
min_difference = difference
best_label = label
best_target_settings = target_settings
return best_label, best_target_settings
def process_blendshape_transitions(current_config: dict, next_config: dict) -> None:
"""
連続する2つのConfigファイル間のBlendShape設定の差異を検出し、遷移データを作成する
Parameters:
current_config: 前のConfigファイルの設定
next_config: 後のConfigファイルの設定
"""
try:
blendshape_settings = next_config['config_data'].get('sourceBlendShapeSettings', [])
current_config['next_blendshape_settings'] = blendshape_settings
# 前のConfigのbaseAvatarDataPathからアバターデータを読み込み
current_base_avatar_data = load_avatar_data_for_blendshape_analysis(current_config['base_avatar_data'])
# BlendShapeGroupsとDeformationFieldsを取得
blend_shape_groups = get_blendshape_groups(current_base_avatar_data)
blend_shape_fields, inverted_blend_shape_fields = get_deformation_fields_mapping(current_base_avatar_data)
# 設定ファイルのディレクトリを取得
current_config_dir = os.path.dirname(os.path.abspath(current_config['config_path']))
print(f"Processing BlendShape transitions between {os.path.basename(current_config['config_path'])} and {os.path.basename(next_config['config_path'])}")
all_transition_sets = []
all_default_transition_sets = []
# 1. ルートレベルの処理
# 全てのtargetBlendShapeSettingsを収集
all_target_settings = {}
all_target_mask_bones = {}
# ルートレベルのtargetBlendShapeSettings
current_target_settings = current_config['config_data'].get('targetBlendShapeSettings', [])
all_target_settings['Basis'] = current_target_settings
all_target_mask_bones['Basis'] = None
# blendShapeFields内のtargetBlendShapeSettings
current_blend_shape_fields = current_config['config_data'].get('blendShapeFields', [])
for field in current_blend_shape_fields:
field_label = field.get('label', '')
field_target_settings = field.get('targetBlendShapeSettings', [])
all_target_settings[field_label] = field_target_settings
all_target_mask_bones[field_label] = field.get('maskBones', [])
# next_configのsourceBlendShapeSettingsに最も近いtargetBlendShapeSettingsを見つける
next_source_settings = next_config['config_data'].get('sourceBlendShapeSettings', [])
if all_target_settings:
best_label, best_target_settings = find_best_matching_target_settings(
'Basis', all_target_settings, all_target_mask_bones, next_source_settings, blend_shape_fields, current_config_dir, None
)
print(f" Best matching target for root level: '{best_label}'")
# 最適なtargetBlendShapeSettingsとsourceBlendShapeSettingsの遷移を作成
basis_transitions = process_single_blendshape_transition_set(
best_target_settings, next_source_settings, 'Basis', best_label,
blend_shape_groups, blend_shape_fields, inverted_blend_shape_fields,
current_config_dir
)
all_transition_sets.append(basis_transitions)
basis_default_transitions = process_single_blendshape_transition_set(
all_target_settings['Basis'], next_source_settings, 'Basis', 'Basis',
blend_shape_groups, blend_shape_fields, inverted_blend_shape_fields,
current_config_dir
)
all_default_transition_sets.append(basis_default_transitions)
# 2. blendShapeFields内の処理
next_blend_shape_fields = next_config['config_data'].get('blendShapeFields', [])
for next_field in next_blend_shape_fields:
next_field_source_label = next_field.get('sourceLabel', '')
next_field_source_settings = next_field.get('sourceBlendShapeSettings', [])
next_field_mask_bones = next_field.get('maskBones', [])
if all_target_settings:
# 最適なtargetBlendShapeSettingsを見つける
best_label, best_target_settings = find_best_matching_target_settings(
next_field_source_label, all_target_settings, all_target_mask_bones, next_field_source_settings, blend_shape_fields, current_config_dir, next_field_mask_bones
)
print(f" Best matching target for field '{next_field_source_label}': '{best_label}'")
# 遷移を作成
field_transitions = process_single_blendshape_transition_set(
best_target_settings, next_field_source_settings, next_field_source_label, best_label,
blend_shape_groups, blend_shape_fields, inverted_blend_shape_fields,
current_config_dir,
next_field_mask_bones
)
all_transition_sets.append(field_transitions)
default_target_setting = None
if next_field_source_label in all_target_settings.keys():
default_target_setting = all_target_settings[next_field_source_label]
if default_target_setting is not None:
field_default_transitions = process_single_blendshape_transition_set(
default_target_setting, next_field_source_settings, next_field_source_label, next_field_source_label,
blend_shape_groups, blend_shape_fields, inverted_blend_shape_fields,
current_config_dir,
next_field_mask_bones
)
all_default_transition_sets.append(field_default_transitions)
# 遷移データを次のconfigオブジェクトに挿入
current_config['config_data']['blend_shape_transition_sets'] = all_transition_sets
current_config['config_data']['blend_shape_default_transition_sets'] = all_default_transition_sets
print(f" Total transition sets: {len(all_transition_sets)}")
print(f" Total default transition sets: {len(all_default_transition_sets)}")
except Exception as e:
print(f"Error processing BlendShape transitions: {e}")
import traceback
traceback.print_exc()
def parse_args():
parser = argparse.ArgumentParser()
# 既存の引数
parser.add_argument('--input', required=True, help='Input clothing FBX file path')
parser.add_argument('--output', required=True, help='Output FBX file path')
parser.add_argument('--base', required=True, help='Base Blender file path')
parser.add_argument('--base-fbx', required=True, help='Comma-separated list of base avatar FBX file paths')
parser.add_argument('--config', required=True, help='Comma-separated list of config file paths')
parser.add_argument('--hips-position', type=str, help='Target Hips bone world position (x,y,z format)')
parser.add_argument('--blend-shapes', type=str, help='Comma-separated list of blend shape labels to apply')
parser.add_argument('--cloth-metadata', type=str, help='Path to cloth metadata JSON file')
parser.add_argument('--mesh-material-data', type=str, help='Path to mesh material data JSON file')
parser.add_argument('--init-pose', required=True, help='Initial pose data JSON file path')
parser.add_argument('--target-meshes', required=False, help='Comma-separated list of mesh names to process')
parser.add_argument('--no-subdivision', action='store_true', help='Disable subdivision during DeformationField deformation')
parser.add_argument('--no-triangle', action='store_true', help='Disable mesh triangulation')
parser.add_argument('--blend-shape-values', type=str, help='Comma-separated list of float values for blend shape intensities')
parser.add_argument('--blend-shape-mappings', type=str, help='Semicolon-separated mappings of label,customName pairs')
parser.add_argument('--name-conv', type=str, help='Path to bone name conversion JSON file')
parser.add_argument('--mesh-renderers', type=str, help='Semicolon-separated list of meshObject,parentObject pairs')
print(sys.argv)
# Get all args after "--"
argv = sys.argv
if "--" not in argv:
parser.print_help()
sys.exit(1)
args = parser.parse_args(argv[argv.index("--") + 1:])
# Parse comma-separated base-fbx and config paths
base_fbx_paths = [path.strip() for path in args.base_fbx.split(',')]
config_paths = [path.strip() for path in args.config.split(',')]
# Validate that base-fbx and config have the same number of entries
if len(base_fbx_paths) != len(config_paths):
print(f"Error: Number of base-fbx files ({len(base_fbx_paths)}) must match number of config files ({len(config_paths)})")
sys.exit(1)
# Validate basic file paths
required_paths = [
args.input, args.base,
args.init_pose
]
for path in required_paths:
if not os.path.exists(path):
print(f"Error: File not found: {path}")
sys.exit(1)
# Validate all base-fbx files exist
for path in base_fbx_paths:
if not os.path.exists(path):
print(f"Error: Base FBX file not found: {path}")
sys.exit(1)
# Validate all config files exist
for path in config_paths:
if not os.path.exists(path):
print(f"Error: Config file not found: {path}")
sys.exit(1)
# Process each config file and create configuration pairs
config_pairs = []
for i, (base_fbx_path, config_path) in enumerate(zip(base_fbx_paths, config_paths)):
try:
with open(config_path, 'r', encoding='utf-8') as f:
config_data = json.load(f)
# blendShapeFieldsの重複するlabelとsourceLabelに___idを付加
if 'blendShapeFields' in config_data:
blend_shape_fields = config_data['blendShapeFields']
# labelの重複をチェックして___idを付加
label_counts = {}
for field in blend_shape_fields:
label = field.get('label', '')
if label:
label_counts[label] = label_counts.get(label, 0) + 1
label_ids = {}
for field in blend_shape_fields:
label = field.get('label', '')
if label and label_counts[label] > 1:
current_id = label_ids.get(label, 0)
field['label'] = f"{label}___{current_id}"
label_ids[label] = current_id + 1
# sourceLabelの重複をチェックして___idを付加
source_label_counts = {}
for field in blend_shape_fields:
source_label = field.get('sourceLabel', '')
if source_label:
source_label_counts[source_label] = source_label_counts.get(source_label, 0) + 1
source_label_ids = {}
for field in blend_shape_fields:
source_label = field.get('sourceLabel', '')
if source_label and source_label_counts[source_label] > 1:
current_id = source_label_ids.get(source_label, 0)
field['sourceLabel'] = f"{source_label}___{current_id}"
source_label_ids[source_label] = current_id + 1
# Get config file directory
config_dir = os.path.dirname(os.path.abspath(config_path))
# Extract and resolve avatar data paths
pose_data_path = config_data.get('poseDataPath')
field_data_path = config_data.get('fieldDataPath')
base_avatar_data_path = config_data.get('baseAvatarDataPath')
clothing_avatar_data_path = config_data.get('clothingAvatarDataPath')
if not pose_data_path:
print(f"Error: poseDataPath not found in config file: {config_path}")
sys.exit(1)
if not field_data_path:
print(f"Error: fieldDataPath not found in config file: {config_path}")
sys.exit(1)
if not base_avatar_data_path:
print(f"Error: baseAvatarDataPath not found in config file: {config_path}")
sys.exit(1)
if not clothing_avatar_data_path:
print(f"Error: clothingAvatarDataPath not found in config file: {config_path}")
sys.exit(1)
# Convert relative paths to absolute paths
if not os.path.isabs(pose_data_path):
pose_data_path = os.path.join(config_dir, pose_data_path)
if not os.path.isabs(field_data_path):
field_data_path = os.path.join(config_dir, field_data_path)
if not os.path.isabs(base_avatar_data_path):
base_avatar_data_path = os.path.join(config_dir, base_avatar_data_path)
if not os.path.isabs(clothing_avatar_data_path):
clothing_avatar_data_path = os.path.join(config_dir, clothing_avatar_data_path)
# Validate avatar data paths
if not os.path.exists(pose_data_path):
print(f"Error: Pose data file not found: {pose_data_path} (from config {config_path})")
sys.exit(1)
if not os.path.exists(field_data_path):
print(f"Error: Field data file not found: {field_data_path} (from config {config_path})")
sys.exit(1)
if not os.path.exists(base_avatar_data_path):
print(f"Error: Base avatar data file not found: {base_avatar_data_path} (from config {config_path})")
sys.exit(1)
if not os.path.exists(clothing_avatar_data_path):
print(f"Error: Clothing avatar data file not found: {clothing_avatar_data_path} (from config {config_path})")
sys.exit(1)
hips_position = None
target_meshes = None
init_pose = None
blend_shapes = None
blend_shape_values = None
blend_shape_mappings = None
mesh_renderers = None
input_clothing_fbx_path = args.output;
if i == 0:
if args.hips_position:
x, y, z = map(float, args.hips_position.split(','))
hips_position = Vector((x, y, z))
target_meshes = args.target_meshes;
init_pose = args.init_pose;
blend_shapes = args.blend_shapes;
# Parse blend shape values if provided
if args.blend_shape_values:
try:
blend_shape_values = [float(v.strip()) for v in args.blend_shape_values.split(',')]
except ValueError as e:
print(f"Error: Invalid blend shape values format: {e}")
sys.exit(1)
# Parse blend shape mappings if provided
if args.blend_shape_mappings:
try:
blend_shape_mappings = {}
pairs = args.blend_shape_mappings.split(';')
for pair in pairs:
if pair.strip():
label, custom_name = pair.split(',', 1)
blend_shape_mappings[label.strip()] = custom_name.strip()
except ValueError as e:
print(f"Error: Invalid blend shape mappings format: {e}")
sys.exit(1)
# Parse mesh renderers if provided
if args.mesh_renderers:
try:
mesh_renderers = {}
pairs = args.mesh_renderers.split(';')
for pair in pairs:
if pair.strip():
mesh_name, parent_name = pair.split(',', 1)
mesh_renderers[mesh_name.strip()] = parent_name.strip()
print(f"Parsed mesh renderers: {mesh_renderers}")
except ValueError as e:
print(f"Error: Invalid mesh renderers format: {e}")
sys.exit(1)
input_clothing_fbx_path = args.input;
skip_blend_shape_generation = True;
if i == len(config_paths) - 1:
skip_blend_shape_generation = False;
do_not_use_base_pose = config_data.get('doNotUseBasePose', 0);
# Create configuration pair
config_pair = {
'base_fbx': base_fbx_path,
'config_path': config_path,
'config_data': config_data,
'pose_data': pose_data_path,
'field_data': field_data_path,
'base_avatar_data': base_avatar_data_path,
'clothing_avatar_data': clothing_avatar_data_path,
'hips_position': hips_position,
'target_meshes': target_meshes,
'init_pose': init_pose,
'blend_shapes': blend_shapes,
'blend_shape_values': blend_shape_values,
'blend_shape_mappings': blend_shape_mappings,
'mesh_renderers': mesh_renderers,
'input_clothing_fbx_path': input_clothing_fbx_path,
'skip_blend_shape_generation': skip_blend_shape_generation,
'do_not_use_base_pose': do_not_use_base_pose
}
config_pairs.append(config_pair)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in config file {config_path}: {e}")
sys.exit(1)
except Exception as e:
print(f"Error reading config file {config_path}: {e}")
sys.exit(1)
# Process BlendShape transitions for consecutive config pairs
if len(config_pairs) >= 2:
for i in range(len(config_pairs) - 1):
process_blendshape_transitions(config_pairs[i], config_pairs[i + 1])
config_pairs[len(config_pairs) - 1]['next_blendshape_settings'] = config_pairs[len(config_pairs) - 1]['config_data'].get('targetBlendShapeSettings', [])
# Store configuration pairs in args for later use
args.config_pairs = config_pairs
# Parse hips position if provided
if args.hips_position:
try:
x, y, z = map(float, args.hips_position.split(','))
args.hips_position = Vector((x, y, z))
except:
print("Error: Invalid hips position format. Use x,y,z")
sys.exit(1)
return args
def load_base_file(filepath: str) -> None:
"""Load the base Blender file containing the character model."""
try:
bpy.ops.wm.open_mainfile(filepath=filepath)
except Exception as e:
raise Exception(f"Failed to load base file: {str(e)}")
def import_fbx(filepath: str) -> None:
"""Import an FBX file."""
try:
bpy.ops.import_scene.fbx(
filepath=filepath,
use_anim=False # アニメーションの読み込みを無効化
)
except Exception as e:
raise Exception(f"Failed to import FBX: {str(e)}")
def get_imported_armature() -> Optional[bpy.types.Object]:
"""Get the most recently imported armature object."""
for obj in bpy.data.objects:
if obj.type == 'ARMATURE' and obj.name != 'Armature.BaseAvatar':
return obj
return None
def load_avatar_data(filepath: str) -> dict:
"""Load and parse avatar data from JSON file."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
raise Exception(f"Failed to load avatar data: {str(e)}")
def import_base_fbx(filepath: str, automatic_bone_orientation: bool = False) -> None:
"""Import base avatar FBX file."""
try:
bpy.ops.import_scene.fbx(
filepath=filepath,
use_anim=False, # アニメーションの読み込みを無効化
automatic_bone_orientation=automatic_bone_orientation
)
except Exception as e:
raise Exception(f"Failed to import base FBX: {str(e)}")
def calculate_vertices_world(mesh_obj):
"""
変形後のメッシュの頂点のワールド座標を取得
Args:
mesh_obj: メッシュオブジェクト
Returns:
vertices_world: ワールド座標のnumpy配列
"""
# 変形後のメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_obj = mesh_obj.evaluated_get(depsgraph)
evaluated_mesh = evaluated_obj.data
# ワールド座標に変換(変形後の頂点位置を使用)
vertices_world = np.array([evaluated_obj.matrix_world @ v.co for v in evaluated_mesh.vertices])
return vertices_world
def find_closest_vertices_brute_force(positions, vertices_world, max_distance=0.0001):
"""
複数の位置に対して最も近い頂点を総当たりで探索
Args:
positions: 検索する位置のリスト(ワールド座標)
vertices_world: メッシュの頂点のワールド座標のリスト
max_distance: 許容する最大距離
Returns:
Dict[int, float]: 頂点インデックスをキーとし、距離を値とする辞書
"""
valid_mappings = {}
# 各検索位置について
for i, search_pos in enumerate(positions):
min_distance = float('inf')
closest_idx = None
# すべてのメッシュ頂点と距離を計算
for vertex_idx, vertex_pos in enumerate(vertices_world):
# ユークリッド距離を計算
distance = ((search_pos[0] - vertex_pos[0])**2 +
(search_pos[1] - vertex_pos[1])**2 +
(search_pos[2] - vertex_pos[2])**2)**0.5
# より近い頂点が見つかった場合は更新
if distance < min_distance:
min_distance = distance
closest_idx = vertex_idx
# 最大距離以内の場合のみマッピングを追加
if closest_idx is not None and min_distance < max_distance:
valid_mappings[i] = closest_idx
return valid_mappings
def load_mesh_material_data(filepath):
"""
メッシュマテリアルデータを読み込み、Blenderのメッシュにマテリアルを設定
Args:
filepath: メッシュマテリアルデータのJSONファイルパス
"""
if not filepath or not os.path.exists(filepath):
print("Warning: Mesh material data file not found or not specified")
return
try:
with open(filepath, 'r') as f:
data = json.load(f)
print(f"Loaded mesh material data from: {filepath}")
for mesh_data in data.get('meshMaterials', []):
mesh_name = mesh_data['meshName']
# Blenderでメッシュオブジェクトを検索
mesh_obj = None
for obj in bpy.data.objects:
if obj.type == 'MESH' and obj.name == mesh_name:
mesh_obj = obj
break
if not mesh_obj:
print(f"Warning: Mesh {mesh_name} not found in Blender scene")
continue
print(f"Processing mesh: {mesh_name}")
# 各サブメッシュを処理
for sub_mesh_idx, sub_mesh_data in enumerate(mesh_data['subMeshes']):
material_name = sub_mesh_data['materialName']
faces_data = sub_mesh_data['faces']
if not faces_data:
continue
# マテリアルを作成または取得
material = bpy.data.materials.get(material_name)
if not material:
material = bpy.data.materials.new(name=material_name)
# デフォルトのマテリアル設定
material.use_nodes = True
print(f"Created material: {material_name}")
# 面から該当するマテリアルインデックスを特定し、そのスロットのマテリアルを入れ替え
material_index = find_material_index_from_faces(mesh_obj, faces_data)
if material_index is not None:
# メッシュにマテリアルスロットが不足している場合は追加
while len(mesh_obj.data.materials) <= material_index:
mesh_obj.data.materials.append(None)
# 該当するマテリアルスロットを入れ替え
mesh_obj.data.materials[material_index] = material
print(f"Replaced material at index {material_index} with {material_name}")
else:
print(f"Warning: Could not find matching faces for material {material_name}")
except Exception as e:
print(f"Error loading mesh material data: {e}")
def find_material_index_from_faces(mesh_obj, faces_data):
"""
面の頂点座標に基づいて該当する面を特定し、マッチした全ての面のマテリアルインデックスの中で
最も頻度が高いものを返す
Args:
mesh_obj: Blenderのメッシュオブジェクト
faces_data: Unityから来た面データのリスト
Returns:
int: 最も頻度が高いマテリアルインデックス(見つからない場合はNone)
"""
from collections import Counter
# オブジェクトモードであることを確認
bpy.context.view_layer.objects.active = mesh_obj
if bpy.context.object.mode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
# シーンの評価を最新の状態に更新
depsgraph = bpy.context.evaluated_depsgraph_get()
depsgraph.update()
mesh = mesh_obj.data
# ワールド変換行列を取得
world_matrix = mesh_obj.matrix_world
tolerance = 0.00001 # 座標の許容誤差
# マッチした面のマテリアルインデックスを記録
matched_material_indices = []
for face_data in faces_data:
# Unity座標をBlender座標に変換
unity_vertices = face_data['vertices']
blender_vertices = []
for unity_vertex in unity_vertices:
# Unity → Blender座標変換
blender_vertex = mathutils.Vector((
-unity_vertex['x'], # X軸反転
-unity_vertex['z'], # Y → Z
unity_vertex['y'] # Z → Y
))
blender_vertices.append(blender_vertex)
# Blenderの面を検索して一致するものを探す
for polygon in mesh.polygons:
if len(polygon.vertices) == 3: # 三角形面処理
# 面の頂点のワールド座標を取得
face_world_verts = []
for vert_idx in polygon.vertices:
vertex = mesh.vertices[vert_idx]
world_vert = world_matrix @ vertex.co
face_world_verts.append(world_vert)
# 3つの頂点すべてが近い位置にあるかチェック
match = True
for i in range(3):
closest_dist = min(
(face_world_verts[j] - blender_vertices[i]).length
for j in range(3)
)
if closest_dist > tolerance:
match = False
break
if match:
# マッチした面のマテリアルインデックスを記録
material_index = polygon.material_index
matched_material_indices.append(material_index)
print(f"Found matching triangular face with material index: {material_index}")
elif len(polygon.vertices) >= 4: # 多角形面処理
num_vertices = len(polygon.vertices)
# 面の頂点のワールド座標を取得
face_world_verts = []
for vert_idx in polygon.vertices:
vertex = mesh.vertices[vert_idx]
world_vert = world_matrix @ vertex.co
face_world_verts.append(world_vert)
# 4つの頂点から3つを選ぶ全ての組み合わせをチェック
from itertools import combinations
for face_vert_combo in combinations(range(num_vertices), 3):
# この組み合わせでマッチするかチェック
match = True
for i in range(3):
closest_dist = min(
(face_world_verts[face_vert_combo[j]] - blender_vertices[i]).length
for j in range(3)
)
if closest_dist > tolerance:
match = False
break
if match:
# マッチした組み合わせが見つかった
material_index = polygon.material_index
matched_material_indices.append(material_index)
print(f"Found matching face (num_vertices: {num_vertices}) with material index: {material_index}")
break # 同じ面の複数の組み合わせを重複カウントしないように
# マッチした面が見つからない場合
if not matched_material_indices:
return None
# 最も頻度が高いマテリアルインデックスを取得
material_counter = Counter(matched_material_indices)
most_common_material = material_counter.most_common(1)[0]
most_common_index = most_common_material[0]
most_common_count = most_common_material[1]
print(f"Material index frequencies: {dict(material_counter)}")
print(f"Most common material index: {most_common_index} (appears {most_common_count} times)")
return most_common_index
def load_cloth_metadata(filepath):
"""
変形後のワールド座標に基づいてClothメタデータをロード
Returns:
Tuple[dict, dict]: (メタデータのマッピング, Unity頂点インデックスからBlender頂点インデックスへのマッピング)
"""
if not filepath or not os.path.exists(filepath):
return {}, {}
try:
with open(filepath, 'r') as f:
data = json.load(f)
metadata_by_mesh = {}
vertex_index_mapping = {} # Unity頂点インデックス -> Blender頂点インデックスのマッピング
# シーンの評価を最新の状態に更新
depsgraph = bpy.context.evaluated_depsgraph_get()
depsgraph.update()
for metadata in data.get('clothMetadata', []):
mesh_name = metadata['meshName']
mesh_obj = None
# メッシュを探す
for obj in bpy.data.objects:
if obj.type == 'MESH' and obj.name == mesh_name:
mesh_obj = obj
break
if not mesh_obj:
print(f"Warning: Mesh {mesh_name} not found")
continue
# Unityのワールド座標をBlenderのワールド座標に変換
unity_positions = []
max_distances = []
for i, vertex_data in enumerate(metadata.get('vertexData', [])):
pos = vertex_data['position']
# Unity座標系からBlender座標系への変換
unity_positions.append([
-pos['x'], # Unity X → Blender X
-pos['z'], # Unity Y → Blender Z
pos['y'] # Unity Z → Blender Y
])
max_distances.append(vertex_data['maxDistance'])
# 一括で最近接頂点を検索
vertices_world = calculate_vertices_world(mesh_obj)
vertex_mappings = find_closest_vertices_brute_force(
unity_positions,
vertices_world,
max_distance=0.0005
)
# 結果を頂点インデックスとmaxDistanceのマッピングに変換
vertex_max_distances = {}
mesh_vertex_mapping = {} # このメッシュのUnity -> Blenderマッピング
for unity_idx, blender_idx in sorted(vertex_mappings.items()):
if unity_idx is not None and blender_idx is not None:
vertex_max_distances[str(blender_idx)] = max_distances[unity_idx]
mesh_vertex_mapping[unity_idx] = blender_idx
metadata_by_mesh[mesh_name] = vertex_max_distances
vertex_index_mapping[mesh_name] = mesh_vertex_mapping
print(f"Processed {len(vertex_max_distances)} vertices for mesh {mesh_name}")
print(f"Original vertex count: {len(metadata['vertexData'])}")
print(f"Original unity position count: {len(unity_positions)}")
print(f"Mapped vertex count: {len(vertex_max_distances)}")
# マッピングできなかった頂点を特定
mapped_indices = set(int(idx) for idx in vertex_max_distances.keys())
unmapped_indices = set(range(len(vertices_world))) - mapped_indices
if unmapped_indices:
print(f"Warning: Could not map {len(unmapped_indices)} vertices")
# デバッグ用の頂点グループを作成
debug_group_name = "DEBUG_UnmappedVertices"
if debug_group_name in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.remove(mesh_obj.vertex_groups[debug_group_name])
debug_group = mesh_obj.vertex_groups.new(name=debug_group_name)
# マッピングできなかった頂点をグループに追加
for idx in unmapped_indices:
debug_group.add([idx], 1.0, 'REPLACE')
print(f"Created vertex group '{debug_group_name}' with {len(unmapped_indices)} vertices")
# デバッグ情報
print(f"First 5 unmapped vertices world positions:")
for idx in list(unmapped_indices)[:5]:
print(f"Vertex {idx}: {vertices_world[idx]}")
return metadata_by_mesh, vertex_index_mapping
except Exception as e:
print(f"Failed to load cloth metadata: {e}")
import traceback
traceback.print_exc()
return {}, {}
def rename_base_objects(mesh_obj: bpy.types.Object, armature_obj: bpy.types.Object) -> tuple:
"""Rename base mesh and armature to specific names."""
# Store original names for reference
original_mesh_name = mesh_obj.name
original_armature_name = armature_obj.name
# Rename mesh to Body.BaseAvatar
mesh_obj.name = "Body.BaseAvatar"
mesh_obj.data.name = "Body.BaseAvatar_Mesh"
# Rename armature to Armature.BaseAvatar
armature_obj.name = "Armature.BaseAvatar"
armature_obj.data.name = "Armature.BaseAvatar_Data"
print(f"Renamed base objects: {original_mesh_name} -> {mesh_obj.name}, {original_armature_name} -> {armature_obj.name}")
return mesh_obj, armature_obj
def cleanup_base_objects(mesh_name: str) -> tuple:
"""Delete all objects except the specified mesh and its armature."""
original_mode = bpy.context.object.mode
bpy.ops.object.mode_set(mode='OBJECT')
# Find the mesh and its armature
target_mesh = None
target_armature = None
for obj in bpy.data.objects:
if obj.type == 'MESH' and obj.name == mesh_name:
target_mesh = obj
# Find associated armature through modifiers
for modifier in obj.modifiers:
if modifier.type == 'ARMATURE':
target_armature = modifier.object
break
if not target_mesh:
raise Exception(f"Mesh '{mesh_name}' not found")
if target_armature and target_armature.parent:
original_active = bpy.context.view_layer.objects.active
bpy.context.view_layer.objects.active = target_armature
bpy.ops.object.parent_clear(type='CLEAR_KEEP_TRANSFORM')
bpy.context.view_layer.objects.active = original_active
# Delete all other objects
for obj in bpy.data.objects[:]: # Create a copy of the list to avoid modification during iteration
if obj != target_mesh and obj != target_armature:
bpy.data.objects.remove(obj, do_unlink=True)
bpy.ops.object.mode_set(mode=original_mode)
# Rename objects to specified names
return rename_base_objects(target_mesh, target_armature)
def apply_blendshape_values(mesh_obj: bpy.types.Object, blendshapes: list) -> None:
"""Apply blendshape values from avatar data."""
if not mesh_obj.data.shape_keys:
return
# Create a mapping of shape key names
shape_keys = mesh_obj.data.shape_keys.key_blocks
# Apply values
for blendshape in blendshapes:
shape_key_name = blendshape["name"]
if shape_key_name in shape_keys:
# Set value to 1% of the specified value
shape_keys[shape_key_name].value = blendshape["value"] * 0.01
def merge_auxiliary_bone_weights(mesh_obj: bpy.types.Object, auxiliary_bones_data: list) -> None:
"""
Merge auxiliary bone weights into their parent humanoid bone weights
Parameters:
mesh_obj: Mesh object to process
auxiliary_bones_data: List of auxiliary bones data from avatar data
"""
if not mesh_obj.vertex_groups:
return
# Process each auxiliary bone set
for aux_bone_set in auxiliary_bones_data:
humanoid_bone = aux_bone_set["humanoidBoneName"]
auxiliary_bones = aux_bone_set["auxiliaryBones"]
# Find the humanoid bone vertex group
humanoid_group = None
for bone in aux_bone_set.get("boneName", [humanoid_bone]):
humanoid_group = mesh_obj.vertex_groups.get(bone)
if humanoid_group:
break
if not humanoid_group:
print(f"Warning: Humanoid bone group '{humanoid_bone}' not found in {mesh_obj.name}")
continue
# Process each auxiliary bone
for aux_bone_name in auxiliary_bones:
aux_group = mesh_obj.vertex_groups.get(aux_bone_name)
if not aux_group:
continue
# For each vertex, add auxiliary bone weight to humanoid bone
for vert in mesh_obj.data.vertices:
aux_weight = 0
for group in vert.groups:
if group.group == aux_group.index:
aux_weight = group.weight
break
if aux_weight > 0:
# Add weight to humanoid bone group
humanoid_group.add([vert.index], aux_weight, 'ADD')
# Remove auxiliary bone vertex group
mesh_obj.vertex_groups.remove(aux_group)
print(f"Merged weights from {aux_bone_name} to {humanoid_bone} in {mesh_obj.name}")
def merge_humanoid_bone_weights(mesh_obj: bpy.types.Object, avatar_data: dict) -> None:
"""
Process humanoid and auxiliary bone weights for a mesh
Parameters:
mesh_obj: Mesh object to process
avatar_data: Avatar data containing bone mapping information
"""
# Create mapping from boneName to humanoidBoneName
bone_mapping = {}
for bone_map in avatar_data.get("humanoidBones", []):
if "boneName" in bone_map and "humanoidBoneName" in bone_map:
bone_mapping[bone_map["boneName"]] = bone_map["humanoidBoneName"]
# Process auxiliary bones if they exist
auxiliary_bones = avatar_data.get("auxiliaryBones", [])
if auxiliary_bones:
merge_auxiliary_bone_weights(mesh_obj, auxiliary_bones)
def set_humanoid_bone_inherit_scale(armature_obj: bpy.types.Object, avatar_data: dict) -> None:
print("set_humanoid_bone_inherit_scale")
# Humanoidボーンの情報を取得
bone_parents, humanoid_to_bone, bone_to_humanoid = get_humanoid_bone_hierarchy(avatar_data)
# EditModeに切り替え
bpy.context.view_layer.objects.active = armature_obj
bpy.ops.object.mode_set(mode='EDIT')
modified_count = 0
# 各Humanoidボーンに対してInherit Scaleを設定
for humanoid_bone_name, bone_name in humanoid_to_bone.items():
if bone_name in armature_obj.data.edit_bones:
edit_bone = armature_obj.data.edit_bones[bone_name]
# Inherit ScaleがNone以外の場合のみ設定
if edit_bone.inherit_scale != 'NONE':
# UpperChest、胸、つま先、足の指のヒューマノイドボーンはFullに設定
if 'Breast' in humanoid_bone_name or 'UpperChest' in humanoid_bone_name or 'Toe' in humanoid_bone_name or ('Foot' in humanoid_bone_name and ('Index' in humanoid_bone_name or 'Little' in humanoid_bone_name or 'Middle' in humanoid_bone_name or 'Ring' in humanoid_bone_name or 'Thumb' in humanoid_bone_name)):
edit_bone.inherit_scale = 'FULL'
else:
edit_bone.inherit_scale = 'AVERAGE'
modified_count += 1
# ObjectModeに戻る
bpy.ops.object.mode_set(mode='OBJECT')
if modified_count > 0:
print(f"{modified_count}個のHumanoidボーンのInherit ScaleをAverageに設定しました")
else:
print("変更すべきボーンがありませんでした")
def process_base_avatar(base_fbx_path: str, avatar_data_path: str) -> tuple:
"""Process base avatar according to avatar data."""
# Load avatar data
avatar_data = load_avatar_data(avatar_data_path)
# Import base FBX
automatic_bone_orientation_int = avatar_data.get("enableAutomaticBoneOrientation", 0)
if automatic_bone_orientation_int == 1:
import_base_fbx(base_fbx_path, True)
else:
import_base_fbx(base_fbx_path, False)
# Clean up objects and get references
mesh_obj, armature_obj = cleanup_base_objects(avatar_data["meshName"])
set_humanoid_bone_inherit_scale(armature_obj, avatar_data)
# Apply blendshape values if they exist
if mesh_obj and "blendshapes" in avatar_data:
apply_blendshape_values(mesh_obj, avatar_data["blendshapes"])
return mesh_obj, armature_obj, avatar_data
def adjust_armature_hips_position(armature_obj: bpy.types.Object, target_position: Vector, clothing_avatar_data: dict) -> None:
"""
アーマチュアのHipsボーンを指定された位置に移動させる。
子オブジェクトのワールド空間での位置を維持する。
目標位置と現在位置が同じ場合は処理をスキップする。
Parameters:
armature_obj: アーマチュアオブジェクト
target_position: 目標とするHipsボーンのワールド座標
clothing_avatar_data: 衣装のアバターデータ
"""
if not armature_obj or armature_obj.type != 'ARMATURE':
return
# Hipsボーンの名前を取得
hips_bone_name = None
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if bone_map["humanoidBoneName"] == "Hips":
hips_bone_name = bone_map["boneName"]
break
if not hips_bone_name:
print("Warning: Hips bone not found in avatar data")
return
# 現在のHipsボーンのワールド座標を取得
pose_bone = armature_obj.pose.bones.get(hips_bone_name)
if not pose_bone:
print(f"Warning: Bone {hips_bone_name} not found in armature")
return
current_position = armature_obj.matrix_world @ pose_bone.head
# 現在位置と目標位置の差を計算
offset = target_position - current_position
print(f"Hip Offset: {offset}")
# 位置の差が十分小さい場合は処理をスキップ
if offset.length < 0.0001: # 0.1mm未満の差は無視
print("Hips position is already at target position, skipping adjustment")
return
# 現在のアクティブオブジェクトとモードを保存
current_active = bpy.context.active_object
current_mode = current_active.mode if current_active else 'OBJECT'
# オブジェクトモードに切り替え
bpy.ops.object.mode_set(mode='OBJECT')
# アーマチュアの子オブジェクトを取得
children = []
for child in bpy.data.objects:
if child.parent == armature_obj:
# 子オブジェクトの情報を保存
children.append(child)
# 親子関係を解除
for child in children:
# 他のオブジェクトの選択を解除
bpy.ops.object.select_all(action='DESELECT')
# 子オブジェクトを選択してアクティブに
child.select_set(True)
bpy.context.view_layer.objects.active = child
# 親子関係を解除
bpy.ops.object.parent_clear(type='CLEAR_KEEP_TRANSFORM')
# アーマチュアを移動
armature_obj.location += offset
# 子オブジェクトの親子関係を復元
for child in children:
# 他のオブジェクトの選択を解除
bpy.ops.object.select_all(action='DESELECT')
# アーマチュアと子オブジェクトを選択
armature_obj.select_set(True)
child.select_set(True)
bpy.context.view_layer.objects.active = armature_obj
# 親子関係を設定
bpy.ops.object.parent_set(type='OBJECT', keep_transform=True)
# 元のアクティブオブジェクトと選択状態を復元
bpy.ops.object.select_all(action='DESELECT')
if current_active:
current_active.select_set(True)
bpy.context.view_layer.objects.active = current_active
if current_mode != 'OBJECT':
bpy.ops.object.mode_set(mode=current_mode)
# ビューを更新
bpy.context.view_layer.update()
def process_clothing_avatar(input_fbx, clothing_avatar_data_path, hips_position=None, target_meshes=None, mesh_renderers=None):
"""Process clothing avatar."""
original_active = bpy.context.view_layer.objects.active
# Import clothing FBX
bpy.ops.import_scene.fbx(filepath=input_fbx, use_anim=False)
# 非アクティブなオブジェクトとその子を削除
def remove_inactive_objects():
"""非アクティブなオブジェクトとそのすべての子を削除する"""
objects_to_remove = []
def is_object_inactive(obj):
"""オブジェクトが非アクティブかどうかを判定"""
# hide_viewport または hide_render が True の場合、非アクティブと判定
return obj.hide_viewport or obj.hide_render or obj.hide_get()
def collect_children_recursive(obj, collected_list):
"""オブジェクトのすべての子を再帰的に収集"""
for child in obj.children:
collected_list.append(child)
collect_children_recursive(child, collected_list)
# 非アクティブなオブジェクトを探す
for obj in bpy.data.objects:
if is_object_inactive(obj) and obj not in objects_to_remove:
objects_to_remove.append(obj)
# すべての子も収集
collect_children_recursive(obj, objects_to_remove)
# 重複を削除
objects_to_remove = list(set(objects_to_remove))
# オブジェクトを削除
for obj in objects_to_remove:
obj_name = obj.name
try:
bpy.data.objects.remove(obj, do_unlink=True)
print(f"Removed inactive object: {obj_name}")
except Exception as e:
print(f"Failed to remove object {obj_name}: {e}")
remove_inactive_objects()
# Load clothing avatar data
print(f"Loading clothing avatar data from {clothing_avatar_data_path}")
with open(clothing_avatar_data_path, 'r', encoding='utf-8') as f:
clothing_avatar_data = json.load(f)
# Find clothing armature
clothing_armature = None
for obj in bpy.data.objects:
if obj.type == 'ARMATURE' and obj.name != "Armature.BaseAvatar":
clothing_armature = obj
break
if not clothing_armature:
raise Exception("Clothing armature not found")
# Find clothing meshes
clothing_meshes = []
for obj in bpy.data.objects:
if obj.type == 'MESH' and obj.name != "Body.BaseAvatar" and obj.name != "Body.BaseAvatar.RightOnly" and obj.name != "Body.BaseAvatar.LeftOnly":
# Check if this mesh has an armature modifier
has_armature = False
for modifier in obj.modifiers:
if modifier.type == 'ARMATURE':
has_armature = True
break
if has_armature:
clothing_meshes.append(obj)
# フィルタリング: target_meshesが指定されている場合、それに含まれるメッシュのみを保持
if target_meshes:
target_mesh_list = [name for name in target_meshes.split(',')]
print(f"Target mesh list: {target_mesh_list}")
filtered_meshes = []
for obj in clothing_meshes:
if obj.name in target_mesh_list:
filtered_meshes.append(obj)
else:
# 対象外のメッシュを削除
obj_name = obj.name
bpy.data.objects.remove(obj, do_unlink=True)
print(f"Removed non-target mesh: {obj_name}")
if not filtered_meshes:
raise Exception(f"No target meshes found. Specified: {target_meshes}")
clothing_meshes = filtered_meshes
# Set hips position if provided
if hips_position:
adjust_armature_hips_position(clothing_armature, hips_position, clothing_avatar_data)
# Process mesh renderers if provided
if mesh_renderers:
print(f"Processing mesh renderers: {mesh_renderers}")
for mesh_name, parent_name in mesh_renderers.items():
# MeshRendererを持っていたオブジェクトと同じ名前を持つメッシュオブジェクトを探す
mesh_obj = None
for obj in bpy.data.objects:
if obj.type == 'MESH' and obj.name == mesh_name:
mesh_obj = obj
break
if mesh_obj:
# Armatureモディファイアを持たず、親の名前がデータ内の親オブジェクトの名前と異なるかチェック
has_armature = False
for modifier in mesh_obj.modifiers:
if modifier.type == 'ARMATURE':
has_armature = True
break
current_parent_name = mesh_obj.parent.name if mesh_obj.parent else None
if not has_armature and current_parent_name != parent_name:
# データ内の親オブジェクトの名前と同じ名前を持つボーンをclothing_armatureから探す
bone_found = False
if parent_name in clothing_armature.data.bones:
# ボーンが見つかった場合、そのボーンをメッシュオブジェクトの親にする
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
# メッシュを選択
mesh_obj.select_set(True)
# アーマチュアをアクティブに設定
bpy.context.view_layer.objects.active = clothing_armature
clothing_armature.select_set(True)
# ポーズモードに切り替えてボーンをアクティブに設定
bpy.ops.object.mode_set(mode='POSE')
clothing_armature.data.bones.active = clothing_armature.data.bones[parent_name]
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# ボーンペアレントを設定(keep_transformでワールド座標を保持)
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
print(f"Set parent bone '{parent_name}' for mesh '{mesh_name}' (world transform preserved)")
bone_found = True
# 選択を解除
bpy.ops.object.select_all(action='DESELECT')
if not bone_found:
print(f"Warning: Bone '{parent_name}' not found in clothing_armature for mesh '{mesh_name}'")
else:
if has_armature:
print(f"Skipping mesh '{mesh_name}': already has Armature modifier")
else:
print(f"Skipping mesh '{mesh_name}': parent already matches ('{current_parent_name}')")
else:
print(f"Warning: Mesh object '{mesh_name}' not found")
bpy.context.view_layer.objects.active = original_active
return clothing_meshes, clothing_armature, clothing_avatar_data
def setup_weight_transfer() -> None:
"""Setup the Robust Weight Transfer plugin settings."""
try:
bpy.context.scene.robust_weight_transfer_settings.source_object = bpy.data.objects["Body.BaseAvatar"]
except Exception as e:
raise Exception(f"Failed to setup weight transfer: {str(e)}")
def triangulate_mesh(obj: bpy.types.Object) -> None:
"""
現在の3DView上でのレンダリングにおける三角面への分割をそのまま利用して、
メッシュのすべての面を三角面に変換する。
Args:
obj: 三角分割するメッシュオブジェクト
"""
if obj is None or obj.type != 'MESH':
return
# 元のアクティブオブジェクトを保存
original_active = bpy.context.view_layer.objects.active
try:
# オブジェクトをアクティブに設定
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
# エディットモードに切り替え
bpy.ops.object.mode_set(mode='EDIT')
# 全ての面を選択
bpy.ops.mesh.select_all(action='SELECT')
# 三角分割を実行(bmeshの三角分割を使用)
bpy.ops.mesh.quads_convert_to_tris(quad_method='FIXED', ngon_method='BEAUTY')
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
print(f"Triangulated mesh: {obj.name}")
except Exception as e:
print(f"Error triangulating mesh {obj.name}: {e}")
# エラーが発生した場合もオブジェクトモードに戻る
try:
bpy.ops.object.mode_set(mode='OBJECT')
except:
pass
finally:
# 元のアクティブオブジェクトを復元
if original_active:
bpy.context.view_layer.objects.active = original_active
obj.select_set(False)
def build_bone_hierarchy(bone_node: dict, bone_parents: Dict[str, str], current_path: list):
"""
ボーン階層から親子関係のマッピングを再帰的に構築する
Parameters:
bone_node (dict): 現在のボーンノード
bone_parents (Dict[str, str]): ボーン名から親ボーン名へのマッピング
current_path (list): 現在のパス上のボーン名のリスト
"""
bone_name = bone_node['name']
if current_path:
bone_parents[bone_name] = current_path[-1]
current_path.append(bone_name)
for child in bone_node.get('children', []):
build_bone_hierarchy(child, bone_parents, current_path)
current_path.pop()
def get_humanoid_bone_hierarchy(avatar_data: dict) -> Tuple[Dict[str, str], Dict[str, str], Dict[str, str]]:
"""
アバターデータからHumanoidボーンの階層関係を抽出する
Parameters:
avatar_data (dict): アバターデータ
Returns:
Tuple[Dict[str, str], Dict[str, str], Dict[str, str]]:
(ボーン名から親への辞書, Humanoidボーン名からボーン名への辞書, ボーン名からHumanoidボーン名への辞書)
"""
# ボーンの親子関係を構築
bone_parents = {}
build_bone_hierarchy(avatar_data['boneHierarchy'], bone_parents, [])
# Humanoidボーン名とボーン名の対応マップを作成
humanoid_to_bone = {bone_map['humanoidBoneName']: bone_map['boneName']
for bone_map in avatar_data['humanoidBones']}
bone_to_humanoid = {bone_map['boneName']: bone_map['humanoidBoneName']
for bone_map in avatar_data['humanoidBones']}
return bone_parents, humanoid_to_bone, bone_to_humanoid
def find_nearest_parent_with_pose(bone_name: str,
bone_parents: Dict[str, str],
bone_to_humanoid: Dict[str, str],
pose_data: dict) -> Optional[str]:
"""
指定されたボーンの親を辿り、ポーズデータを持つ最も近い親のHumanoidボーン名を返す
Parameters:
bone_name (str): 開始ボーン名
bone_parents (Dict[str, str]): ボーンの親子関係辞書
bone_to_humanoid (Dict[str, str]): ボーン名からHumanoidボーン名への変換辞書
pose_data (dict): ポーズデータ
Returns:
Optional[str]: 見つかった親のHumanoidボーン名、見つからない場合はNone
"""
current_bone = bone_name
while current_bone in bone_parents:
parent_bone = bone_parents[current_bone]
if parent_bone in bone_to_humanoid:
parent_humanoid = bone_to_humanoid[parent_bone]
if parent_humanoid in pose_data:
return parent_humanoid
current_bone = parent_bone
return None
def clear_humanoid_bone_relations_preserve_pose(armature_obj, clothing_avatar_data_filepath, base_avatar_data_filepath):
"""
Humanoidボーンの親子関係を解除しながらワールド空間でのポーズを保持する。
ベースアバターのアバターデータにないHumanoidボーンの親子関係は保持する。
Args:
armature_obj: bpy.types.Object - アーマチュアオブジェクト
clothing_avatar_data_filepath: str - 衣装のアバターデータのJSONファイル名
base_avatar_data_filepath: str - ベースアバターのアバターデータのJSONファイル名
"""
if armature_obj.type != 'ARMATURE':
raise ValueError("Selected object must be an armature")
# アバターデータを読み込む
clothing_avatar_data = load_avatar_data(clothing_avatar_data_filepath)
base_avatar_data = load_avatar_data(base_avatar_data_filepath)
# 衣装のHumanoidボーンのセットを作成
clothing_humanoid_bones = {bone_map['boneName'] for bone_map in clothing_avatar_data['humanoidBones']}
# ベースアバターのHumanoidボーンのセットを作成
base_humanoid_bones = {bone_map['humanoidBoneName'] for bone_map in base_avatar_data['humanoidBones']}
# 衣装のHumanoidボーン名からHumanoidボーン名への変換マップを作成
clothing_bone_to_humanoid = {bone_map['boneName']: bone_map['humanoidBoneName']
for bone_map in clothing_avatar_data['humanoidBones']}
# 親子関係を解除するボーンを特定(ベースアバターにも存在するHumanoidボーンのみ)
bones_to_unparent = set()
for bone_name in clothing_humanoid_bones:
humanoid_name = clothing_bone_to_humanoid.get(bone_name)
if humanoid_name == "UpperChest" or humanoid_name == "LeftBreast" or humanoid_name == "RightBreast" or humanoid_name == "LeftToes" or humanoid_name == "RightToes":
continue
bones_to_unparent.add(bone_name)
#if humanoid_name in base_humanoid_bones:
# bones_to_unparent.add(bone_name)
# Get the armature data
armature = armature_obj.data
# Store original world space matrices for bones to unparent
original_matrices = {}
for bone in armature.bones:
if bone.name in bones_to_unparent:
pose_bone = armature_obj.pose.bones[bone.name]
original_matrices[bone.name] = armature_obj.matrix_world @ pose_bone.matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
# Switch to edit mode to modify bone relations
bpy.context.view_layer.objects.active = armature_obj
original_mode = bpy.context.object.mode
bpy.ops.object.mode_set(mode='EDIT')
# Clear parent relationships for specified bones only
for edit_bone in armature.edit_bones:
if edit_bone.name in bones_to_unparent:
edit_bone.parent = None
# Return to pose mode
bpy.ops.object.mode_set(mode='POSE')
# Restore original world space positions for unparented bones
for bone_name, original_matrix in original_matrices.items():
pose_bone = armature_obj.pose.bones[bone_name]
pose_bone.matrix = armature_obj.matrix_world.inverted() @ original_matrix
# Return to original mode
bpy.ops.object.mode_set(mode=original_mode)
def is_finger_bone(humanoid_bone: str) -> bool:
"""
指のボーンかどうかを判定する
Parameters:
humanoid_bone (str): Humanoidボーン名
Returns:
bool: 指のボーンの場合True
"""
finger_keywords = [
"Thumb", "Index", "Middle", "Ring", "Little",
"Toe"
]
return any(keyword in humanoid_bone for keyword in finger_keywords)
def get_next_joint_bone(humanoid_bone: str) -> Optional[str]:
"""
指の次の関節のボーン名を取得する
Parameters:
humanoid_bone (str): Humanoidボーン名
Returns:
Optional[str]: 次の関節のボーン名、存在しない場合None
"""
joint_mapping = {
"Proximal": "Intermediate",
"Intermediate": "Distal",
}
# 現在の関節タイプを特定
current_joint = None
for joint_type in joint_mapping.keys():
if joint_type in humanoid_bone:
current_joint = joint_type
break
if not current_joint:
return None
# 次の関節のボーン名を生成
next_joint = joint_mapping[current_joint]
return humanoid_bone.replace(current_joint, next_joint)
def apply_finger_bone_adjustments(
armature_obj: bpy.types.Object,
humanoid_to_bone: Dict[str, str],
bone_to_humanoid: Dict[str, str]
) -> None:
"""
指のボーンの位置を調整する
各ボーンのTailが次の関節のHeadと一致するように調整
Parameters:
armature_obj: アーマチュアオブジェクト
humanoid_to_bone: Humanoidボーン名からボーン名への変換辞書
bone_to_humanoid: ボーン名からHumanoidボーン名への変換辞書
"""
# すべての指ボーンについて処理
for bone_name, pose_bone in armature_obj.pose.bones.items():
if bone_name not in bone_to_humanoid:
continue
humanoid_bone = bone_to_humanoid[bone_name]
if not is_finger_bone(humanoid_bone):
continue
# 次の関節を取得
next_humanoid_bone = get_next_joint_bone(humanoid_bone)
if not next_humanoid_bone or next_humanoid_bone not in humanoid_to_bone:
continue
next_bone_name = humanoid_to_bone[next_humanoid_bone]
if next_bone_name not in armature_obj.pose.bones:
continue
next_bone = armature_obj.pose.bones[next_bone_name]
# 現在のボーンの方向ベクトルを取得
current_dir = ((armature_obj.matrix_world @ pose_bone.tail) - (armature_obj.matrix_world @ pose_bone.head)).normalized()
# 世界空間での位置を計算
head_world = armature_obj.matrix_world @ pose_bone.head
next_head_world = armature_obj.matrix_world @ next_bone.head
# 新しい方向ベクトルを計算
new_dir = (next_head_world - head_world).normalized()
# 回転の差分を計算
#rot_diff = new_dir.rotation_difference(current_dir)
rot_diff = current_dir.rotation_difference(new_dir)
# 現在の行列を取得
current_matrix = pose_bone.matrix.copy()
translation, rotation, scale = current_matrix.decompose()
trans_mat = Matrix.Translation(translation)
# 回転を適用した新しい行列を作成
rot_matrix = rot_diff.to_matrix().to_4x4()
new_matrix = trans_mat @ rot_matrix @ trans_mat.inverted() @ current_matrix
# 新しい行列を適用
pose_bone.matrix = new_matrix
def list_to_matrix(matrix_list):
"""
リストからMatrix型に変換する(JSON読み込み用)
Parameters:
matrix_list: list - 行列のデータを含む2次元リスト
Returns:
Matrix: 変換された行列
"""
return Matrix(matrix_list)
def add_pose_from_json(armature_obj, filepath, avatar_data, invert=False):
"""
JSONファイルから読み込んだポーズデータをアクティブなArmatureの現在のポーズに加算する
Parameters:
armature_obj: アーマチュアオブジェクト
filepath (str): 読み込むJSONファイルのパス
avatar_data (dict): アバターデータ
invert (bool): 逆変換を適用するかどうか
"""
# アクティブオブジェクトを取得
if not armature_obj:
raise ValueError("No active object found")
if armature_obj.type != 'ARMATURE':
raise ValueError(f"Active object '{armature_obj.name}' is not an armature")
# 階層関係と変換マップを取得
bone_parents, humanoid_to_bone, bone_to_humanoid = get_humanoid_bone_hierarchy(avatar_data)
# ファイルの存在確認
if not os.path.exists(filepath):
raise FileNotFoundError(f"Pose data file not found: {filepath}")
# JSONファイルを読み込む
with open(filepath, 'r', encoding='utf-8') as f:
pose_data = json.load(f)
# アンドゥ用にステップを作成
bpy.ops.ed.undo_push(message="Add Pose from JSON")
bpy.ops.object.mode_set(mode='OBJECT')
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
bpy.context.view_layer.objects.active = armature_obj
# エディットモードに切り替え
bpy.ops.object.mode_set(mode='EDIT')
# すべての編集ボーンのConnectedを解除
for bone in armature_obj.data.edit_bones:
bone.use_connect = False
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# 親子関係を維持したまま処理するため、階層順序でボーンを取得
def get_bone_hierarchy_order():
"""親から子への順序でHumanoidボーンを取得"""
order = []
visited = set()
def add_bone_and_children(humanoid_bone):
if humanoid_bone in visited:
return
visited.add(humanoid_bone)
order.append(humanoid_bone)
# 子ボーンを検索
for child_bone, parent_bone in bone_parents.items():
if parent_bone == humanoid_bone and child_bone not in visited:
add_bone_and_children(child_bone)
# ルートボーン(Hips)から開始
root_bones = []
root_bones.append(humanoid_to_bone['Hips'])
for root_bone in root_bones:
add_bone_and_children(root_bone)
return order
bone_order = get_bone_hierarchy_order()
# 処理済みのHumanoidボーンを記録する辞書
processed_bones = {}
# 事前にすべてのボーンの変形前の状態を保存
original_bone_data = {}
for humanoid_bone in humanoid_to_bone.keys():
bone_name = humanoid_to_bone.get(humanoid_bone)
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
original_bone_data[humanoid_bone] = {
'matrix': bone.matrix.copy(),
'head': bone.head.copy(),
'tail': bone.tail.copy(),
'bone_name': bone_name
}
# 階層順序でポーズデータの計算を実行
for bone_name in bone_order:
if not bone_name or bone_name not in armature_obj.pose.bones:
continue
humanoid_bone = bone_to_humanoid.get(bone_name)
if not humanoid_bone:
continue
# 既に処理済みの場合はスキップ
if humanoid_bone in processed_bones:
continue
# ポーズデータを直接持っているか、親から継承するかを決定
source_humanoid_bone = humanoid_bone
if humanoid_bone not in pose_data:
parent_with_pose = find_nearest_parent_with_pose(
bone_name, bone_parents, bone_to_humanoid, pose_data)
if not parent_with_pose:
continue
source_humanoid_bone = parent_with_pose
print(f"Using pose data from parent bone {source_humanoid_bone} for {humanoid_bone}")
# 保存されたオリジナルデータを使用して計算
if humanoid_bone not in original_bone_data:
continue
bone = armature_obj.pose.bones[bone_name]
original_data = original_bone_data[humanoid_bone]
# 現在のワールド空間での行列を取得(オリジナルデータを使用)
current_world_matrix = armature_obj.matrix_world @ original_data['matrix']
# 差分変換行列を取得
delta_matrix = list_to_matrix(pose_data[source_humanoid_bone]['delta_matrix'])
if invert:
delta_matrix = delta_matrix.inverted()
# 現在の行列に加算
combined_matrix = delta_matrix @ current_world_matrix
# ローカル空間に変換して適用
bone.matrix = armature_obj.matrix_world.inverted() @ combined_matrix
# 変更を即座に反映(子ボーンの計算に影響するため)
bpy.context.view_layer.update()
# 処理済みとしてマーク
processed_bones[humanoid_bone] = True
# 最終的なポーズの更新を強制
bpy.context.view_layer.update()
print(f"Pose data added to armature '{armature_obj.name}' from {filepath}")
def add_clothing_pose_from_json(armature_obj, pose_filepath="pose_data.json", init_pose_filepath="initial_pose.json", clothing_avatar_data_filepath="avatar_data.json", base_avatar_data_filepath="avatar_data.json", invert=False):
"""
JSONファイルから読み込んだポーズデータをアクティブなArmatureの現在のポーズに加算する
Parameters:
filename (str): 読み込むJSONファイルの名前
avatar_data_file (str): アバターデータのJSONファイル名
invert (bool): 逆変換を適用するかどうか
"""
if not armature_obj:
raise ValueError("No active object found")
if armature_obj.type != 'ARMATURE':
raise ValueError(f"Active object '{armature_obj.name}' is not an armature")
# アバターデータを読み込む
avatar_data = load_avatar_data(clothing_avatar_data_filepath)
# 階層関係と変換マップを取得
bone_parents, humanoid_to_bone, bone_to_humanoid = get_humanoid_bone_hierarchy(avatar_data)
# ファイルの存在確認
if not os.path.exists(pose_filepath):
raise FileNotFoundError(f"Pose data file not found: {pose_filepath}")
# JSONファイルを読み込む
with open(pose_filepath, 'r', encoding='utf-8') as f:
pose_data = json.load(f)
# アンドゥ用にステップを作成
bpy.ops.ed.undo_push(message="Add Pose from JSON")
bpy.ops.object.mode_set(mode='OBJECT')
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
bpy.context.view_layer.objects.active = armature_obj
# エディットモードに切り替え
bpy.ops.object.mode_set(mode='EDIT')
# すべての編集ボーンのConnectedを解除
for bone in armature_obj.data.edit_bones:
bone.use_connect = False
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# 親子関係を維持したまま処理するため、階層順序でボーンを取得
def get_bone_hierarchy_order():
"""親から子への順序でHumanoidボーンを取得"""
order = []
visited = set()
def add_bone_and_children(humanoid_bone):
if humanoid_bone in visited:
return
visited.add(humanoid_bone)
order.append(humanoid_bone)
# 子ボーンを検索
for child_bone, parent_bone in bone_parents.items():
if parent_bone == humanoid_bone and child_bone not in visited:
add_bone_and_children(child_bone)
# ルートボーン(Hips)から開始
root_bones = []
root_bones.append(humanoid_to_bone['Hips'])
for root_bone in root_bones:
add_bone_and_children(root_bone)
return order
bone_order = get_bone_hierarchy_order()
# Humanoidボーンの親子関係を解除
clear_humanoid_bone_relations_preserve_pose(armature_obj, clothing_avatar_data_filepath, base_avatar_data_filepath)
bpy.context.view_layer.update()
# ポーズを適用する前に現在のポーズを記録
store_pose_globally(armature_obj)
print(f"Pose state stored globally before applying pose from {pose_filepath}")
# 初期ポーズの適用(新しい独立した関数を使用)
if init_pose_filepath:
apply_initial_pose_to_armature(armature_obj, init_pose_filepath, clothing_avatar_data_filepath)
# 処理済みのHumanoidボーンを記録する辞書
processed_bones = {}
# ポーズデータを現在のポーズに加算
for humanoid_bone in bone_to_humanoid.values():
# 既に処理済みの場合はスキップ
if humanoid_bone in processed_bones:
continue
if humanoid_bone == "UpperChest" or \
humanoid_bone == "LeftBreast" or humanoid_bone == "RightBreast" or \
humanoid_bone == "LeftToes" or humanoid_bone == "RightToes":
continue
bone_name = humanoid_to_bone.get(humanoid_bone)
if not bone_name or bone_name not in armature_obj.pose.bones:
continue
# ポーズデータを直接持っているか、親から継承するかを決定
source_humanoid_bone = humanoid_bone
if humanoid_bone not in pose_data:
parent_with_pose = find_nearest_parent_with_pose(
bone_name, bone_parents, bone_to_humanoid, pose_data)
if not parent_with_pose:
continue
source_humanoid_bone = parent_with_pose
print(f"Using pose data from parent bone {source_humanoid_bone} for {humanoid_bone}")
# ポーズデータを適用
bone = armature_obj.pose.bones[bone_name]
# 現在のワールド空間での行列を取得
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# 差分変換行列を取得
delta_matrix = list_to_matrix(pose_data[source_humanoid_bone]['delta_matrix'])
if invert:
delta_matrix = delta_matrix.inverted()
# 現在の行列に加算
combined_matrix = delta_matrix @ current_world_matrix
# ローカル空間に変換して適用
bone.matrix = armature_obj.matrix_world.inverted() @ combined_matrix
# 処理済みとしてマーク
processed_bones[humanoid_bone] = True
# ポーズの更新を強制
bpy.context.view_layer.update()
print(f"Pose data added to armature '{armature_obj.name}' from {pose_filepath}")
for bone_name in armature_obj.pose.bones.keys():
if bone_name in bone_to_humanoid:
humanoid_name = bone_to_humanoid[bone_name]
if humanoid_name in processed_bones:
mat = armature_obj.pose.bones[bone_name].matrix
print(f"'{humanoid_name}' ({bone_name}) bone.matrix_final {mat}")
def remove_empty_vertex_groups(mesh_obj: bpy.types.Object) -> None:
"""Remove vertex groups that are empty or have zero weights for all vertices."""
if not mesh_obj.type == 'MESH' or not mesh_obj.vertex_groups:
return
groups_to_remove = []
for vgroup in mesh_obj.vertex_groups:
has_weights = False
for vert in mesh_obj.data.vertices:
weight_index = vgroup.index
for g in vert.groups:
if g.group == weight_index and g.weight > 0.0005:
has_weights = True
break
if has_weights:
break
if not has_weights:
groups_to_remove.append(vgroup.name)
for group_name in groups_to_remove:
if group_name in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.remove(mesh_obj.vertex_groups[group_name])
print(f"Removed empty vertex group: {group_name}")
def find_parent_bone_hierarchy(current_node: dict, target_bone: str, parent_bone: str = None) -> str:
"""
Recursively search for a bone in the hierarchy and return its parent.
Parameters:
current_node: Current node in the bone hierarchy
target_bone: Name of the bone to find
parent_bone: Name of the parent bone (used in recursion)
Returns:
Name of the parent bone or None if not found
"""
# Check if current node is the target
if current_node["name"] == target_bone:
return parent_bone
# Search children
for child in current_node.get("children", []):
result = find_parent_bone_hierarchy(child, target_bone, current_node["name"])
if result is not None:
return result
return None
def get_bone_parent_map(bone_hierarchy: dict) -> dict:
"""
Create a map of bones to their parents from the hierarchy.
Parameters:
bone_hierarchy: Bone hierarchy data from avatar data
Returns:
Dictionary mapping bone names to their parent bone names
"""
parent_map = {}
def traverse_hierarchy(node, parent=None):
current_bone = node["name"]
parent_map[current_bone] = parent
for child in node.get("children", []):
traverse_hierarchy(child, current_bone)
traverse_hierarchy(bone_hierarchy)
return parent_map
def merge_weights_to_parent(mesh_obj: bpy.types.Object, source_bone: str, target_bone: str) -> None:
"""
Merge weights from source bone to target bone and remove source bone vertex group.
Parameters:
mesh_obj: Mesh object to process
source_bone: Name of the source bone (whose weights will be moved)
target_bone: Name of the target bone (that will receive the weights)
"""
source_group = mesh_obj.vertex_groups.get(source_bone)
target_group = mesh_obj.vertex_groups.get(target_bone)
if not source_group:
return
if not target_group:
# Create target group if it doesn't exist
target_group = mesh_obj.vertex_groups.new(name=target_bone)
# Transfer weights
for vert in mesh_obj.data.vertices:
source_weight = 0
for group in vert.groups:
if group.group == source_group.index:
source_weight = group.weight
break
if source_weight > 0:
target_group.add([vert.index], source_weight, 'ADD')
# Remove source group
mesh_obj.vertex_groups.remove(source_group)
print(f"Merged weights from {source_bone} to {target_bone} in {mesh_obj.name}")
def apply_bone_name_conversion(clothing_armature: bpy.types.Object, clothing_meshes: list, name_conv_data: dict) -> None:
"""
JSONファイルで指定されたボーンの名前変更マッピングに従って、
clothing_armatureのボーンとclothing_meshesの頂点グループの名前を変更する
Parameters:
clothing_armature: 服のアーマチュアオブジェクト
clothing_meshes: 服のメッシュオブジェクトのリスト
name_conv_data: ボーン名前変更マッピングのJSONデータ
"""
if not name_conv_data or 'boneMapping' not in name_conv_data:
print("ボーン名前変更データが見つかりません")
return
bone_mappings = name_conv_data['boneMapping']
renamed_bones = {}
print(f"ボーン名前変更処理を開始: {len(bone_mappings)}個のマッピング")
# 1. アーマチュアのボーン名を変更
if clothing_armature and clothing_armature.type == 'ARMATURE':
# Edit modeに入ってボーン名を変更
bpy.context.view_layer.objects.active = clothing_armature
bpy.ops.object.mode_set(mode='EDIT')
for mapping in bone_mappings:
fbx_bone = mapping.get('fbxBone')
prefab_bone = mapping.get('prefabBone')
if not fbx_bone or not prefab_bone or fbx_bone == prefab_bone:
continue
# アーマチュア内でfbxBoneに対応するボーンを探す
if fbx_bone in clothing_armature.data.edit_bones:
edit_bone = clothing_armature.data.edit_bones[fbx_bone]
edit_bone.name = prefab_bone
renamed_bones[fbx_bone] = prefab_bone
print(f"アーマチュアのボーン名を変更: {fbx_bone} -> {prefab_bone}")
bpy.ops.object.mode_set(mode='OBJECT')
# 2. メッシュの頂点グループ名を変更
for mesh_obj in clothing_meshes:
if not mesh_obj or mesh_obj.type != 'MESH':
continue
for mapping in bone_mappings:
fbx_bone = mapping.get('fbxBone')
prefab_bone = mapping.get('prefabBone')
if not fbx_bone or not prefab_bone or fbx_bone == prefab_bone:
continue
# 頂点グループの名前を変更
if fbx_bone in mesh_obj.vertex_groups:
vertex_group = mesh_obj.vertex_groups[fbx_bone]
vertex_group.name = prefab_bone
print(f"メッシュ {mesh_obj.name} の頂点グループ名を変更: {fbx_bone} -> {prefab_bone}")
print(f"ボーン名前変更処理完了: {len(renamed_bones)}個のボーンが変更されました")
def normalize_clothing_bone_names(clothing_armature: bpy.types.Object, clothing_avatar_data: dict,
clothing_meshes: list) -> None:
"""
Normalize bone names in clothing_avatar_data to match existing bones in clothing_armature.
For each humanoidBone in clothing_avatar_data:
1. Check if boneName exists in clothing_armature
2. If not, convert boneName to lowercase alphabetic characters and find matching bone
3. Update boneName in clothing_avatar_data if match found
4. Update corresponding vertex group names in all clothing_meshes
"""
import re
# Get all bone names from clothing armature
armature_bone_names = {bone.name for bone in clothing_armature.data.bones}
print(f"Available bones in clothing armature: {sorted(armature_bone_names)}")
# Store name changes for vertex group updates
bone_name_changes = {}
# Process each humanoid bone mapping
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if "boneName" not in bone_map:
continue
original_bone_name = bone_map["boneName"]
# Check if bone exists in armature
if original_bone_name in armature_bone_names:
print(f"Bone '{original_bone_name}' found in armature")
continue
# Extract alphabetic characters and convert to lowercase
normalized_pattern = re.sub(r'[^a-zA-Z]', '', original_bone_name).lower()
if not normalized_pattern:
print(f"Warning: No alphabetic characters found in bone name '{original_bone_name}'")
continue
print(f"Looking for bone matching pattern '{normalized_pattern}' (from '{original_bone_name}')")
# Find matching bone in armature
matching_bone = None
for armature_bone_name in armature_bone_names:
armature_normalized = re.sub(r'[^a-zA-Z]', '', armature_bone_name).lower()
if armature_normalized == normalized_pattern:
matching_bone = armature_bone_name
break
if matching_bone:
print(f"Found matching bone: '{original_bone_name}' -> '{matching_bone}'")
bone_name_changes[matching_bone] = original_bone_name
else:
print(f"Warning: No matching bone found for '{original_bone_name}' (pattern: '{normalized_pattern}')")
# Update vertex group names in all clothing meshes
if bone_name_changes:
print(f"Updating vertex groups with bone name changes: {bone_name_changes}")
for mesh_obj in clothing_meshes:
if not mesh_obj or mesh_obj.type != 'MESH':
continue
for old_name, new_name in bone_name_changes.items():
if old_name in mesh_obj.vertex_groups:
vertex_group = mesh_obj.vertex_groups[old_name]
vertex_group.name = new_name
print(f"Updated vertex group '{old_name}' -> '{new_name}' in mesh '{mesh_obj.name}'")
# Update bone names in clothing armature
print(f"Updating bone names in clothing armature: {bone_name_changes}")
for old_name, new_name in bone_name_changes.items():
if old_name in clothing_armature.data.bones:
bone = clothing_armature.data.bones[old_name]
bone.name = new_name
print(f"Updated armature bone '{old_name}' -> '{new_name}'")
print("Bone name normalization completed")
def apply_initial_pose_to_armature(armature_obj, init_pose_filepath, clothing_avatar_data_filepath):
"""
Apply initial pose from JSON to the armature.
Parameters:
armature_obj: Target armature object
init_pose_filepath: Path to initial pose JSON file
clothing_avatar_data_filepath: Path to avatar data JSON file
"""
if not init_pose_filepath or not os.path.exists(init_pose_filepath):
return
# アバターデータを読み込む
avatar_data = load_avatar_data(clothing_avatar_data_filepath)
# 階層関係と変換マップを取得
bone_parents, humanoid_to_bone, bone_to_humanoid = get_humanoid_bone_hierarchy(avatar_data)
# 親から子への順序でHumanoidボーンを取得
def get_bone_hierarchy_order():
order = []
visited = set()
def add_bone_and_children(humanoid_bone):
if humanoid_bone in visited:
return
visited.add(humanoid_bone)
order.append(humanoid_bone)
# 子ボーンを検索
for child_bone, parent_bone in bone_parents.items():
if parent_bone == humanoid_bone and child_bone not in visited:
add_bone_and_children(child_bone)
# ルートボーン(Hips)から開始
root_bones = []
root_bones.append(humanoid_to_bone['Hips'])
for root_bone in root_bones:
add_bone_and_children(root_bone)
return order
bone_order = get_bone_hierarchy_order()
# 初期ポーズの適用
with open(init_pose_filepath, 'r', encoding='utf-8') as f:
init_pose_data = json.load(f)
# ボーン名をキーとしたマッピングを作成
bone_transforms = {}
for bone_data in init_pose_data.get("bones", []):
bone_name = bone_data["boneName"]
transform = bone_data["transform"]
bone_transforms[bone_name] = transform
# 処理済みのHumanoidボーンを記録する辞書
processed_bones = {}
# 事前にすべてのボーンの変形前の状態を保存
original_bone_data = {}
for bone_name in bone_order:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
original_bone_data[bone_name] = {
'matrix': bone.matrix.copy(),
'head': bone.head.copy(),
'tail': bone.tail.copy(),
}
# アーマチュアの各ボーンに初期ポーズを適用
for bone_name in bone_order:
if not bone_name or bone_name not in armature_obj.pose.bones:
continue
# 既に処理済みの場合はスキップ
if bone_name in processed_bones:
continue
# 保存されたオリジナルデータを使用して計算
if bone_name not in original_bone_data:
continue
if bone_name not in bone_transforms:
continue
bone = armature_obj.pose.bones[bone_name]
original_data = original_bone_data[bone_name]
# 現在のワールド空間での行列を取得(オリジナルデータを使用)
current_world_matrix = armature_obj.matrix_world @ original_data['matrix']
transform = bone_transforms[bone_name]
# delta_matrixが存在するかチェック
if "delta_matrix" in transform:
# 差分変換行列を取得
delta_matrix = list_to_matrix(transform['delta_matrix'])
# 現在の行列に適用
combined_matrix = delta_matrix @ current_world_matrix
# ローカル空間に変換して適用
bone.matrix = armature_obj.matrix_world.inverted() @ combined_matrix
else:
# 後方互換性のため、古い形式(position, rotation, scale)もサポート
# 位置を設定
pos = transform.get("position", [0, 0, 0])
init_loc = Vector((pos[0], pos[1], pos[2]))
# 回転を設定(度数からラジアンに変換)
rot = transform.get("rotation", [0, 0, 0])
init_rot = Euler([math.radians(r) for r in rot], 'XYZ')
# スケールを設定
scale = transform.get("scale", [1, 1, 1])
init_scale = Vector((scale[0], scale[1], scale[2]))
head_world = armature_obj.matrix_world @ bone.head
offset_matrix = Matrix.Translation(head_world)
# 新しい行列を作成
delta_matrix = Matrix.Translation(init_loc) @ \
init_rot.to_matrix().to_4x4() @ \
Matrix.Scale(init_scale.x, 4, (1, 0, 0)) @ \
Matrix.Scale(init_scale.y, 4, (0, 1, 0)) @ \
Matrix.Scale(init_scale.z, 4, (0, 0, 1))
# 現在の行列に加算
combined_matrix = offset_matrix @ delta_matrix @ offset_matrix.inverted() @ current_world_matrix
# ローカル空間に変換して適用
bone.matrix = armature_obj.matrix_world.inverted() @ combined_matrix
# 変更を即座に反映(子ボーンの計算に影響するため)
bpy.context.view_layer.update()
# 処理済みとしてマーク
processed_bones[bone_name] = True
# ビューを更新
bpy.context.view_layer.update()
def is_A_pose(avatar_data: dict, armature: bpy.types.Object, init_pose_filepath=None, pose_filepath=None, clothing_avatar_data_filepath=None) -> bool:
"""
Check if the avatar data is in A pose.
Creates a temporary copy of the armature, applies initial pose, checks A-pose, then deletes the copy.
Parameters:
avatar_data: Avatar data dictionary
armature: Target armature object
init_pose_filepath: Path to initial pose JSON file (optional)
clothing_avatar_data_filepath: Path to avatar data JSON file (optional)
"""
# 一時的にarmatureをコピー
original_active = bpy.context.view_layer.objects.active
original_mode = armature.mode if hasattr(armature, 'mode') else 'OBJECT'
# オブジェクトモードに切り替え
if bpy.context.object and bpy.context.object.mode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
# 選択を解除
bpy.ops.object.select_all(action='DESELECT')
# アーマチュアをコピー
armature.select_set(True)
bpy.context.view_layer.objects.active = armature
bpy.ops.object.duplicate()
temp_armature = bpy.context.active_object
temp_armature.name = f"{armature.name}_temp_A_pose_check"
try:
# 初期ポーズを適用
if init_pose_filepath and clothing_avatar_data_filepath:
apply_initial_pose_to_armature(temp_armature, init_pose_filepath, clothing_avatar_data_filepath)
if pose_filepath and clothing_avatar_data_filepath:
with open(clothing_avatar_data_filepath, 'r', encoding='utf-8') as f:
clothing_avatar_data = json.load(f)
add_pose_from_json(temp_armature, pose_filepath, clothing_avatar_data, invert=False)
# Create mappings for clothing
humanoid_to_bone = {}
for bone_map in avatar_data.get("humanoidBones", []):
if "humanoidBoneName" in bone_map and "boneName" in bone_map:
humanoid_to_bone[bone_map["humanoidBoneName"]] = bone_map["boneName"]
arm_bone = None
lower_arm_bone = None
for bone in temp_armature.pose.bones:
if bone.name == humanoid_to_bone.get("LeftUpperArm"):
for bone2 in temp_armature.pose.bones:
if bone2.name == humanoid_to_bone.get("LeftLowerArm"):
lower_arm_bone = bone2
break
if lower_arm_bone:
arm_bone = bone
break
elif bone.name == humanoid_to_bone.get("RightUpperArm"):
for bone2 in temp_armature.pose.bones:
if bone2.name == humanoid_to_bone.get("RightLowerArm"):
lower_arm_bone = bone2
break
if lower_arm_bone:
arm_bone = bone
break
result = False
if arm_bone and lower_arm_bone:
arm_bone_direction = (temp_armature.matrix_world @ lower_arm_bone.head) - (temp_armature.matrix_world @ arm_bone.head)
arm_bone_direction = arm_bone_direction.normalized()
arm_bone_angle = math.acos(abs(arm_bone_direction.dot(Vector((1, 0, 0)))))
print(f"arm_bone: {arm_bone.name}")
print(f"lower_arm_bone: {lower_arm_bone.name}")
print(f"arm_bone_head: {temp_armature.matrix_world @ arm_bone.head}")
print(f"lower_arm_bone_head: {temp_armature.matrix_world @ lower_arm_bone.head}")
print(f"arm_bone_direction: {arm_bone_direction}")
print(f"arm_bone_angle: {math.degrees(arm_bone_angle)}")
if math.degrees(arm_bone_angle) > 30:
result = True
else:
result = False
else:
result = False
finally:
# 一時的なarmatureを削除
bpy.ops.object.select_all(action='DESELECT')
temp_armature.select_set(True)
bpy.context.view_layer.objects.active = temp_armature
bpy.ops.object.delete()
# 元のアクティブオブジェクトを復元
if original_active:
bpy.context.view_layer.objects.active = original_active
return result
def generate_temp_shapekeys_for_weight_transfer(obj: bpy.types.Object, armature_obj: bpy.types.Object, avatar_data: dict, is_A_pose: bool) -> None:
"""
Generate temp shapekeys for weight transfer.
"""
if obj.type != 'MESH':
return
set_armature_modifier_visibility(obj, True, True)
for sk in obj.data.shape_keys.key_blocks:
if sk.name != "Basis":
if sk.name == "SymmetricDeformed":
sk.value = 1.0
else:
sk.value = 0.0
A_pose_shape_verts = None
crotch_shape_verts = None
original_shape_key_state = save_shape_key_state(obj)
if is_A_pose:
restore_shape_key_state(obj, original_shape_key_state)
# 左右の腕全体のボーンにY軸回転を適用
print(" 左右の腕全体のボーンにY軸回転を適用")
bpy.context.view_layer.objects.active = armature_obj
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesから左右の腕全体のboneNameを取得
left_arm_humanoid_names = [
"LeftUpperArm", "LeftLowerArm", "LeftHand",
"LeftThumbProximal", "LeftThumbIntermediate", "LeftThumbDistal",
"LeftIndexProximal", "LeftIndexIntermediate", "LeftIndexDistal",
"LeftMiddleProximal", "LeftMiddleIntermediate", "LeftMiddleDistal",
"LeftRingProximal", "LeftRingIntermediate", "LeftRingDistal",
"LeftLittleProximal", "LeftLittleIntermediate", "LeftLittleDistal"
]
right_arm_humanoid_names = [
"RightUpperArm", "RightLowerArm", "RightHand",
"RightThumbProximal", "RightThumbIntermediate", "RightThumbDistal",
"RightIndexProximal", "RightIndexIntermediate", "RightIndexDistal",
"RightMiddleProximal", "RightMiddleIntermediate", "RightMiddleDistal",
"RightRingProximal", "RightRingIntermediate", "RightRingDistal",
"RightLittleProximal", "RightLittleIntermediate", "RightLittleDistal"
]
left_arm_bones = []
right_arm_bones = []
left_upper_arm_bone = None
right_upper_arm_bone = None
# ヒューマノイドボーンを取得
for bone_map in avatar_data.get("humanoidBones", []):
humanoid_name = bone_map.get("humanoidBoneName")
bone_name = bone_map.get("boneName")
if humanoid_name == "LeftUpperArm":
left_upper_arm_bone = bone_name
elif humanoid_name == "RightUpperArm":
right_upper_arm_bone = bone_name
if humanoid_name in left_arm_humanoid_names:
left_arm_bones.append(bone_name)
elif humanoid_name in right_arm_humanoid_names:
right_arm_bones.append(bone_name)
# LeftUpperArmのheadを起点として取得
left_pivot_point = None
if left_upper_arm_bone and left_upper_arm_bone in armature_obj.pose.bones:
left_pivot_point = armature_obj.matrix_world @ armature_obj.pose.bones[left_upper_arm_bone].head
# RightUpperArmのheadを起点として取得
right_pivot_point = None
if right_upper_arm_bone and right_upper_arm_bone in armature_obj.pose.bones:
right_pivot_point = armature_obj.matrix_world @ armature_obj.pose.bones[right_upper_arm_bone].head
# 左腕全体に-45度のY軸回転を適用(LeftUpperArmのheadを起点)
if left_pivot_point:
for bone_name in left_arm_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用(LeftUpperArmのheadを起点)
offset_matrix = mathutils.Matrix.Translation(left_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-45), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# 右腕全体に45度のY軸回転を適用(RightUpperArmのheadを起点)
if right_pivot_point:
for bone_name in right_arm_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用(RightUpperArmのheadを起点)
offset_matrix = mathutils.Matrix.Translation(right_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(45), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = obj
bpy.context.view_layer.update()
#現在の評価済みメッシュを取得、アーマチュア変形後の状態を保存
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
A_pose_shape_verts = np.array([v.co.copy() for v in eval_mesh.vertices])
# 左腕全体に45度のY軸回転を適用(戻す)(LeftUpperArmのheadを起点)
if left_pivot_point:
for bone_name in left_arm_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用(LeftUpperArmのheadを起点)
offset_matrix = mathutils.Matrix.Translation(left_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(45), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# 右腕全体に-45度のY軸回転を適用(戻す)(RightUpperArmのheadを起点)
if right_pivot_point:
for bone_name in right_arm_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用(RightUpperArmのheadを起点)
offset_matrix = mathutils.Matrix.Translation(right_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-45), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
restore_shape_key_state(obj, original_shape_key_state)
# 左右の足全体のボーンにY軸回転を適用
print(" 左右の足全体のボーンにY軸回転を適用")
bpy.context.view_layer.objects.active = armature_obj
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesから左右の足全体のboneNameを取得
left_leg_humanoid_names = [
"LeftUpperLeg", "LeftLowerLeg", "LeftFoot"
]
right_leg_humanoid_names = [
"RightUpperLeg", "RightLowerLeg", "RightFoot"
]
left_leg_bones = []
right_leg_bones = []
left_upper_leg_bone = None
right_upper_leg_bone = None
# ヒューマノイドボーンを取得
for bone_map in avatar_data.get("humanoidBones", []):
humanoid_name = bone_map.get("humanoidBoneName")
bone_name = bone_map.get("boneName")
if humanoid_name == "LeftUpperLeg":
left_upper_leg_bone = bone_name
elif humanoid_name == "RightUpperLeg":
right_upper_leg_bone = bone_name
if humanoid_name in left_leg_humanoid_names:
left_leg_bones.append(bone_name)
elif humanoid_name in right_leg_humanoid_names:
right_leg_bones.append(bone_name)
# LeftUpperLegのheadを起点として取得
left_leg_pivot_point = None
if left_upper_leg_bone and left_upper_leg_bone in armature_obj.pose.bones:
left_leg_pivot_point = armature_obj.matrix_world @ armature_obj.pose.bones[left_upper_leg_bone].head
# RightUpperLegのheadを起点として取得
right_leg_pivot_point = None
if right_upper_leg_bone and right_upper_leg_bone in armature_obj.pose.bones:
right_leg_pivot_point = armature_obj.matrix_world @ armature_obj.pose.bones[right_upper_leg_bone].head
# 左足全体に-70度のY軸回転を適用(LeftUpperLegのheadを起点)
if left_leg_pivot_point:
for bone_name in left_leg_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での-70度Y軸回転を適用(LeftUpperLegのheadを起点)
offset_matrix = mathutils.Matrix.Translation(left_leg_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-70), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# 右足全体に70度のY軸回転を適用(RightUpperLegのheadを起点)
if right_leg_pivot_point:
for bone_name in right_leg_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での70度Y軸回転を適用(RightUpperLegのheadを起点)
offset_matrix = mathutils.Matrix.Translation(right_leg_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(70), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = obj
bpy.context.view_layer.update()
#現在の評価済みメッシュを取得、アーマチュア変形後の状態を保存
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
crotch_shape_verts = np.array([v.co.copy() for v in eval_mesh.vertices])
# 左足全体に70度のY軸回転を適用(戻す)(LeftUpperLegのheadを起点)
if left_leg_pivot_point:
for bone_name in left_leg_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での70度Y軸回転を適用(LeftUpperLegのheadを起点)
offset_matrix = mathutils.Matrix.Translation(left_leg_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(70), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# 右足全体に-70度のY軸回転を適用(戻す)(RightUpperLegのheadを起点)
if right_leg_pivot_point:
for bone_name in right_leg_bones:
if bone_name and bone_name in armature_obj.pose.bones:
bone = armature_obj.pose.bones[bone_name]
current_world_matrix = armature_obj.matrix_world @ bone.matrix
# グローバル座標系での-70度Y軸回転を適用(RightUpperLegのheadを起点)
offset_matrix = mathutils.Matrix.Translation(right_leg_pivot_point * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-70), 4, 'Y')
bone.matrix = armature_obj.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
apply_modifiers_keep_shapekeys_with_temp(obj)
if obj.data.shape_keys is None:
obj.shape_key_add(name='Basis')
if is_A_pose:
# 一時シェイプキーを作成
shape_key_forA = obj.shape_key_add(name="WT_shape_forA.MFTemp")
shape_key_forA.value = 0.0
for i in range(len(A_pose_shape_verts)):
shape_key_forA.data[i].co = A_pose_shape_verts[i]
# 一時シェイプキーを作成
shape_key_forCrotch = obj.shape_key_add(name="WT_shape_forCrotch.MFTemp")
shape_key_forCrotch.value = 0.0
for i in range(len(crotch_shape_verts)):
shape_key_forCrotch.data[i].co = crotch_shape_verts[i]
restore_shape_key_state(obj, original_shape_key_state)
def process_missing_bone_weights(base_mesh: bpy.types.Object, clothing_armature: bpy.types.Object,
base_avatar_data: dict, clothing_avatar_data: dict, preserve_optional_humanoid_bones: bool) -> None:
"""
Process weights for humanoid bones that exist in base avatar but not in clothing.
"""
# Get bone names from clothing armature
clothing_bone_names = set(bone.name for bone in clothing_armature.data.bones)
# Create mappings for base avatar
base_humanoid_to_bone = {}
base_bone_to_humanoid = {}
for bone_map in base_avatar_data.get("humanoidBones", []):
if "humanoidBoneName" in bone_map and "boneName" in bone_map:
base_humanoid_to_bone[bone_map["humanoidBoneName"]] = bone_map["boneName"]
base_bone_to_humanoid[bone_map["boneName"]] = bone_map["humanoidBoneName"]
# Create mappings for clothing
clothing_humanoid_to_bone = {}
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if "humanoidBoneName" in bone_map and "boneName" in bone_map:
clothing_humanoid_to_bone[bone_map["humanoidBoneName"]] = bone_map["boneName"]
# Create auxiliary bones mapping
aux_bones_map = {}
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
bone_name = base_humanoid_to_bone.get(humanoid_bone)
if bone_name:
aux_bones_map[bone_name] = aux_set["auxiliaryBones"]
# Create parent map from bone hierarchy
parent_map = get_bone_parent_map(base_avatar_data["boneHierarchy"])
# Process each humanoid bone from base avatar
for humanoid_name, bone_name in base_humanoid_to_bone.items():
# Skip if bone exists in clothing armature
if clothing_humanoid_to_bone.get(humanoid_name) in clothing_bone_names:
continue
# Check if this bone should be preserved when preserve_optional_humanoid_bones is True
if preserve_optional_humanoid_bones:
should_preserve = False
# Condition 1: Chest exists in clothing, UpperChest missing in clothing but exists in base
if (humanoid_name == "UpperChest" and
"Chest" in clothing_humanoid_to_bone and
clothing_humanoid_to_bone["Chest"] in clothing_bone_names and
"UpperChest" not in clothing_humanoid_to_bone and
"UpperChest" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving UpperChest bone weights due to Chest condition")
# Condition 2: LeftLowerLeg exists in clothing, LeftFoot missing in clothing but exists in base
elif (humanoid_name == "LeftFoot" and
"LeftLowerLeg" in clothing_humanoid_to_bone and
clothing_humanoid_to_bone["LeftLowerLeg"] in clothing_bone_names and
"LeftFoot" not in clothing_humanoid_to_bone and
"LeftFoot" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving LeftFoot bone weights due to LeftLowerLeg condition")
# Condition 2: RightLowerLeg exists in clothing, RightFoot missing in clothing but exists in base
elif (humanoid_name == "RightFoot" and
"RightLowerLeg" in clothing_humanoid_to_bone and
clothing_humanoid_to_bone["RightLowerLeg"] in clothing_bone_names and
"RightFoot" not in clothing_humanoid_to_bone and
"RightFoot" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving RightFoot bone weights due to RightLowerLeg condition")
# Condition 3: LeftLowerLeg or LeftFoot exists in clothing, LeftToe missing in clothing but exists in base
elif (humanoid_name == "LeftToe" and
(("LeftLowerLeg" in clothing_humanoid_to_bone and clothing_humanoid_to_bone["LeftLowerLeg"] in clothing_bone_names) or
("LeftFoot" in clothing_humanoid_to_bone and clothing_humanoid_to_bone["LeftFoot"] in clothing_bone_names)) and
"LeftToe" not in clothing_humanoid_to_bone and
"LeftToe" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving LeftToe bone weights due to LeftLowerLeg/LeftFoot condition")
# Condition 3: RightLowerLeg or RightFoot exists in clothing, RightToe missing in clothing but exists in base
elif (humanoid_name == "RightToe" and
(("RightLowerLeg" in clothing_humanoid_to_bone and clothing_humanoid_to_bone["RightLowerLeg"] in clothing_bone_names) or
("RightFoot" in clothing_humanoid_to_bone and clothing_humanoid_to_bone["RightFoot"] in clothing_bone_names)) and
"RightToe" not in clothing_humanoid_to_bone and
"RightToe" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving RightToe bone weights due to RightLowerLeg/RightFoot condition")
elif (humanoid_name == "LeftBreast" and
"LeftBreast" not in clothing_humanoid_to_bone and
("Chest" in clothing_humanoid_to_bone or "UpperChest" in clothing_humanoid_to_bone) and
(clothing_humanoid_to_bone["Chest"] in clothing_bone_names or clothing_humanoid_to_bone["UpperChest"] in clothing_bone_names) and
"LeftBreast" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving LeftBreast bone weights due to Chest condition")
elif (humanoid_name == "RightBreast" and
"RightBreast" not in clothing_humanoid_to_bone and
("Chest" in clothing_humanoid_to_bone or "UpperChest" in clothing_humanoid_to_bone) and
(clothing_humanoid_to_bone["Chest"] in clothing_bone_names or clothing_humanoid_to_bone["UpperChest"] in clothing_bone_names) and
"RightBreast" in base_humanoid_to_bone):
should_preserve = True
print(f"Preserving RightBreast bone weights due to Chest condition")
if should_preserve:
print(f"Skipping processing for preserved bone: {humanoid_name} ({bone_name})")
continue
print(f"Processing missing humanoid bone: {humanoid_name} ({bone_name})")
# Find parent that exists in clothing armature
current_bone = bone_name
target_bone = None
while current_bone and not target_bone:
parent_bone = parent_map.get(current_bone)
if not parent_bone:
break
parent_humanoid = base_bone_to_humanoid.get(parent_bone)
if parent_humanoid and clothing_humanoid_to_bone.get(parent_humanoid) in clothing_bone_names:
target_bone = base_humanoid_to_bone[parent_humanoid]
break
current_bone = parent_bone
if target_bone:
# Transfer main bone weights
source_group = base_mesh.vertex_groups.get(bone_name)
if source_group:
merge_weights_to_parent(base_mesh, bone_name, target_bone)
# Transfer auxiliary bone weights
for aux_bone in aux_bones_map.get(bone_name, []):
if aux_bone in base_mesh.vertex_groups:
merge_weights_to_parent(base_mesh, aux_bone, target_bone)
# Remove source groups
if bone_name in base_mesh.vertex_groups:
base_mesh.vertex_groups.remove(base_mesh.vertex_groups[bone_name])
for aux_bone in aux_bones_map.get(bone_name, []):
if aux_bone in base_mesh.vertex_groups:
base_mesh.vertex_groups.remove(base_mesh.vertex_groups[aux_bone])
def update_base_avatar_weights(base_mesh: bpy.types.Object, clothing_armature: bpy.types.Object,
base_avatar_data: dict, clothing_avatar_data: dict, preserve_optional_humanoid_bones: bool) -> None:
"""
Update base avatar weights based on clothing armature structure.
Parameters:
base_mesh: Base avatar mesh object
clothing_armature: Clothing armature object
base_avatar_data: Base avatar data
clothing_avatar_data: Clothing avatar data
"""
# Then process missing bone weights
process_missing_bone_weights(base_mesh, clothing_armature, base_avatar_data, clothing_avatar_data, preserve_optional_humanoid_bones)
def normalize_bone_weights(obj: bpy.types.Object, avatar_data: dict) -> None:
"""
メッシュのボーン変形に関わる頂点ウェイトを正規化する。
Parameters:
obj: メッシュオブジェクト
avatar_data: アバターデータ
"""
if obj.type != 'MESH':
return
# 正規化対象のボーングループを取得
target_groups = set()
# Humanoidボーンを追加
for bone_map in avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
target_groups.add(bone_map["boneName"])
# 補助ボーンを追加
for aux_set in avatar_data.get("auxiliaryBones", []):
for aux_bone in aux_set.get("auxiliaryBones", []):
target_groups.add(aux_bone)
# 各頂点について処理
for vert in obj.data.vertices:
# ターゲットグループのウェイト合計を計算
total_weight = 0.0
weights = {}
for g in vert.groups:
group_name = obj.vertex_groups[g.group].name
if group_name in target_groups:
total_weight += g.weight
weights[group_name] = g.weight
# ウェイトの正規化
for group_name, weight in weights.items():
normalized_weight = weight / total_weight
obj.vertex_groups[group_name].add([vert.index], normalized_weight, 'REPLACE')
def create_hinge_bone_group(obj: bpy.types.Object, armature: bpy.types.Object, avatar_data: dict) -> None:
"""
Create a hinge bone group.
"""
bone_groups = get_humanoid_and_auxiliary_bone_groups(avatar_data)
# 衣装アーマチュアのボーングループも含めた対象グループを作成
all_deform_groups = set(bone_groups)
if armature:
all_deform_groups.update(bone.name for bone in armature.data.bones)
# original_groupsからbone_groupsを除いたグループのウェイトを保存
original_non_humanoid_groups = all_deform_groups - bone_groups
cloth_bm = get_evaluated_mesh(obj)
cloth_bm.verts.ensure_lookup_table()
cloth_bm.faces.ensure_lookup_table()
vertex_coords = np.array([v.co for v in cloth_bm.verts])
kdtree = cKDTree(vertex_coords)
hinge_bone_group = obj.vertex_groups.new(name="HingeBone")
for bone_name in original_non_humanoid_groups:
bone = armature.pose.bones.get(bone_name)
if bone.parent and bone.parent.name in bone_groups:
group_index = obj.vertex_groups.find(bone_name)
print(f"Processing hinge bone: {bone_name}")
print(f"Bone parent: {bone.parent.name}")
print(f"Group index: {group_index}")
if group_index != -1:
bone_head = armature.matrix_world @ bone.head
neighbor_indices = kdtree.query_ball_point(bone_head, 0.01)
for index in neighbor_indices:
for g in obj.data.vertices[index].groups:
if g.group == group_index:
weight = g.weight
hinge_bone_group.add([index], weight, 'REPLACE')
print(f"Added weight to {index}")
break
def get_humanoid_and_auxiliary_bones(avatar_data: dict) -> set:
"""
Get a set of all humanoid and auxiliary bone names from avatar data.
Parameters:
avatar_data: Avatar data containing bone information
Returns:
Set of bone names
"""
bone_names = set()
# Add humanoid bones
for bone_map in avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
bone_names.add(bone_map["boneName"])
# Add auxiliary bones
for aux_set in avatar_data.get("auxiliaryBones", []):
for aux_bone in aux_set.get("auxiliaryBones", []):
bone_names.add(aux_bone)
return bone_names
def copy_bone_transform(source_bone: bpy.types.EditBone, target_bone: bpy.types.EditBone) -> None:
"""
Copy transformation data from source bone to target bone.
Parameters:
source_bone: Source edit bone
target_bone: Target edit bone
"""
target_bone.head = source_bone.head.copy()
target_bone.tail = source_bone.tail.copy()
target_bone.roll = source_bone.roll
target_bone.matrix = source_bone.matrix.copy()
target_bone.length = source_bone.length
def find_humanoid_parent_in_clothing(bone_name: str, clothing_bones_to_humanoid: dict, clothing_armature: bpy.types.Object) -> Optional[str]:
"""
clothing_armatureでboneの親を辿り、最初に見つかるhumanoidボーンを返す
Parameters:
bone_name: 開始ボーン名
clothing_bones_to_humanoid: ボーン名からHumanoidボーン名への変換辞書
clothing_armature: 衣装のアーマチュアオブジェクト
Returns:
Optional[str]: 見つかった親のHumanoidボーン名、見つからない場合はNone
"""
current_bone = clothing_armature.data.bones.get(bone_name)
while current_bone and current_bone.parent:
parent_bone = current_bone.parent
if parent_bone.name in clothing_bones_to_humanoid:
return clothing_bones_to_humanoid[parent_bone.name]
current_bone = parent_bone
return None
def find_humanoid_parent_in_hierarchy(bone_name: str, clothing_avatar_data: dict, base_avatar_data: dict) -> Optional[str]:
"""
clothing_avatar_dataのboneHierarchyでbone_nameから親を辿り、base_armatureにも存在する最初のhumanoidボーンを返す
Parameters:
bone_name: 開始ボーン名
clothing_avatar_data: 衣装のアバターデータ
base_avatar_data: ベースのアバターデータ
Returns:
Optional[str]: 見つかった親のHumanoidボーン名、見つからない場合はNone
"""
# clothing_avatar_dataのhumanoidBonesからbone_nameのhumanoidBoneNameを取得
clothing_bones_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
base_humanoid_bones = {bone_map["humanoidBoneName"] for bone_map in base_avatar_data["humanoidBones"]}
def find_bone_in_hierarchy(hierarchy_node, target_name):
"""階層内でボーンを探す再帰関数"""
if hierarchy_node["name"] == target_name:
return hierarchy_node
for child in hierarchy_node.get("children", []):
result = find_bone_in_hierarchy(child, target_name)
if result:
return result
return None
def find_parent_path(hierarchy_node, target_name, path=[]):
"""ターゲットボーンまでのパスを見つける再帰関数"""
current_path = path + [hierarchy_node["name"]]
if hierarchy_node["name"] == target_name:
return current_path
for child in hierarchy_node.get("children", []):
result = find_parent_path(child, target_name, current_path)
if result:
return result
return None
# boneHierarchyでbone_nameまでのパスを取得
bone_hierarchy = clothing_avatar_data.get("boneHierarchy")
if not bone_hierarchy:
return None
path = find_parent_path(bone_hierarchy, bone_name)
if not path:
return None
# パスを逆順にして親から辿る
path.reverse()
# 自分から親に向かってhumanoidボーンを探す
for parent_bone_name in path:
if parent_bone_name in clothing_bones_to_humanoid:
humanoid_name = clothing_bones_to_humanoid[parent_bone_name]
if humanoid_name in base_humanoid_bones:
return humanoid_name
return None
def replace_humanoid_bones(base_armature: bpy.types.Object, clothing_armature: bpy.types.Object,
base_avatar_data: dict, clothing_avatar_data: dict, preserve_humanoid_bones: bool, base_pose_filepath: Optional[str], clothing_meshes: list, process_upper_chest: bool) -> None:
current_active = bpy.context.active_object
current_mode = current_active.mode if current_active else 'OBJECT'
# Create mappings
base_humanoid_map = {bone_map["humanoidBoneName"]: bone_map["boneName"]
for bone_map in base_avatar_data["humanoidBones"]}
clothing_humanoid_map = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
clothing_bones_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
# Create reverse mapping for finding bones by humanoid names
base_bone_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in base_avatar_data["humanoidBones"]}
# Humanoidボーンの照合
clothing_humanoid_bones = {bone_map["humanoidBoneName"] for bone_map in clothing_avatar_data["humanoidBones"]}
base_humanoid_bones = {bone_map["humanoidBoneName"] for bone_map in base_avatar_data["humanoidBones"]}
# base_avatar_dataに存在しないHumanoidボーンを特定
# missing_humanoid_bones = clothing_humanoid_bones - base_humanoid_bones
missing_humanoid_bones = {}
# Map auxiliary bones to humanoid bones
aux_to_humanoid = {}
for aux_set in clothing_avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
# base_avatar_dataに存在しないHumanoidボーンの補助ボーンは除外
if humanoid_bone not in missing_humanoid_bones:
for aux_bone in aux_set["auxiliaryBones"]:
aux_to_humanoid[aux_bone] = humanoid_bone
# Map humanoid bones to auxiliary bones
humanoid_to_aux = {}
for aux_set in clothing_avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
# base_avatar_dataに存在しないHumanoidボーンの補助ボーンは除外
if humanoid_bone not in missing_humanoid_bones:
humanoid_to_aux[humanoid_bone] = aux_set["auxiliaryBones"]
humanoid_to_aux_base = {}
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_to_aux_base[aux_set["humanoidBoneName"]] = aux_set["auxiliaryBones"]
# bones_to_replaceからbase_avatar_dataに存在しないHumanoidボーンとその補助ボーンを除外
bones_to_replace = set()
for bone_map in clothing_avatar_data["humanoidBones"]:
if bone_map["humanoidBoneName"] not in missing_humanoid_bones:
bones_to_replace.add(bone_map["boneName"])
for aux_set in clothing_avatar_data.get("auxiliaryBones", []):
if aux_set["humanoidBoneName"] not in missing_humanoid_bones:
bones_to_replace.update(aux_set["auxiliaryBones"])
print(f"bones_to_replace: {bones_to_replace}")
base_bones = get_humanoid_and_auxiliary_bone_groups_with_intermediate(base_armature, base_avatar_data)
# Get humanoid bones that should be preserved
if preserve_humanoid_bones:
humanoid_bones_to_preserve = {bone_name for bone_name, humanoid_name
in clothing_bones_to_humanoid.items()
if humanoid_name not in missing_humanoid_bones}
else:
humanoid_bones_to_preserve = set()
# Get base mesh and create BVH tree
base_mesh = bpy.data.objects.get("Body.BaseAvatar")
if not base_mesh:
raise Exception("Body.BaseAvatar not found")
bm = bmesh.new()
bm.from_mesh(base_mesh.data)
bm.faces.ensure_lookup_table()
bm.transform(base_mesh.matrix_world)
bvh = BVHTree.FromBMesh(bm)
# Armatureモディファイアの設定を保存して一時的に削除
armature_modifiers = []
clothing_obj_list = []
for obj in bpy.data.objects:
if obj.type == 'MESH':
for modifier in obj.modifiers[:]: # リストのコピーを使用
if modifier.type == 'ARMATURE' and modifier.object == clothing_armature:
mod_settings = {
'object': obj,
'name': modifier.name,
'target': modifier.object,
'vertex_group': modifier.vertex_group,
'invert_vertex_group': modifier.invert_vertex_group,
'use_vertex_groups': modifier.use_vertex_groups,
'use_bone_envelopes': modifier.use_bone_envelopes,
'use_deform_preserve_volume': modifier.use_deform_preserve_volume
}
armature_modifiers.append(mod_settings)
obj.modifiers.remove(modifier)
clothing_obj_list.append(obj)
if base_pose_filepath:
print(f"Applying clothing base pose from {base_pose_filepath}")
add_pose_from_json(clothing_armature, base_pose_filepath, clothing_avatar_data, invert=True)
apply_pose_as_rest(clothing_armature)
# Get clothing bone positions and their original parents
clothing_bone_data = {}
clothing_matrix_world = clothing_armature.matrix_world
for bone in clothing_armature.pose.bones:
if bone.parent and bone.parent.name in bones_to_replace and bone.name not in bones_to_replace:
head_pos = clothing_matrix_world @ bone.head
# まずbone.parentのhumanoid名を取得
parent_humanoid = None
if bone.parent.name in clothing_humanoid_map:
parent_humanoid = clothing_humanoid_map[bone.parent.name]
elif bone.parent.name in aux_to_humanoid:
parent_humanoid = aux_to_humanoid[bone.parent.name]
# もしparent_humanoidがbase_humanoid_mapに存在しない場合は
# clothing_avatar_dataで親を辿ってbase_avatar_dataにも存在する最初のhumanoidボーンを探す
if parent_humanoid and parent_humanoid not in base_humanoid_map:
# parent_humanoid = find_humanoid_parent_in_clothing(bone.parent.name, clothing_bones_to_humanoid, clothing_armature)
parent_humanoid = find_humanoid_parent_in_hierarchy(bone.parent.name, clothing_avatar_data, base_avatar_data)
if parent_humanoid and parent_humanoid in base_humanoid_map:
# 候補ボーンを取得
candidate_bones = {base_humanoid_map[parent_humanoid]}
if parent_humanoid in humanoid_to_aux_base:
candidate_bones.update(humanoid_to_aux_base[parent_humanoid])
# ChestでUpperChestが存在する場合の処理
sub_parent_humanoid = None
if parent_humanoid == 'Chest' and 'UpperChest' in base_humanoid_map and process_upper_chest:
sub_parent_humanoid = base_humanoid_map['UpperChest']
candidate_bones.add(sub_parent_humanoid)
if 'UpperChest' in humanoid_to_aux_base:
candidate_bones.update(humanoid_to_aux_base['UpperChest'])
clothing_bone_data[bone.name] = {
'head_pos': head_pos,
'candidate_bones': candidate_bones,
'parent_humanoid': base_humanoid_map[parent_humanoid],
'sub_parent_humanoid': sub_parent_humanoid
}
base_group_index_to_name = {group.index: group.name for group in base_mesh.vertex_groups}
# Find parent bones using only the candidate bones
parent_bones = {}
for bone_name, data in clothing_bone_data.items():
head_pos = data['head_pos']
candidate_bones = data['candidate_bones']
parent_humanoid = data['parent_humanoid']
sub_parent_humanoid = data.get('sub_parent_humanoid', None)
# 追加手法: clothing_meshesから対象ボーンのウェイトが一定以上の頂点を取得し、それらに近いbase_mesh頂点から
# candidate_bonesのウェイトスコアを集計する
bone_scores = defaultdict(float)
weighted_vertices = []
for mesh_obj in clothing_meshes:
if mesh_obj.type != 'MESH':
continue
vg_lookup = {vg.name: vg.index for vg in mesh_obj.vertex_groups}
if bone_name not in vg_lookup:
continue
target_group_index = vg_lookup[bone_name]
mesh_data = mesh_obj.data
mesh_world_matrix = mesh_obj.matrix_world
for vertex in mesh_data.vertices:
weight = 0.0
for g in vertex.groups:
if g.group == target_group_index:
weight = g.weight
break
if weight >= 0.001:
vertex_world_co = mesh_world_matrix @ vertex.co
weighted_vertices.append((vertex_world_co, weight))
print(f"bone_name: {bone_name}, weighted_vertices: {len(weighted_vertices)}")
if weighted_vertices:
weighted_vertices.sort(key=lambda item: item[1], reverse=True)
top_vertices = weighted_vertices[:100]
for vertex_world_co, _ in top_vertices:
closest_point, _, face_idx, _ = bvh.find_nearest(vertex_world_co)
if closest_point is None or face_idx is None:
continue
face = bm.faces[face_idx]
vertex_indices = [v.index for v in face.verts]
closest_vert_idx = min(
vertex_indices,
key=lambda idx: (base_mesh.data.vertices[idx].co - closest_point).length
)
vertex = base_mesh.data.vertices[closest_vert_idx]
for group_element in vertex.groups:
group_name = base_group_index_to_name.get(group_element.group)
if group_name in candidate_bones:
bone_scores[group_name] += group_element.weight
chosen_parent = None
if bone_scores:
print(f"bone_scores: {bone_scores}")
chosen_parent = max(bone_scores.items(), key=lambda item: item[1])[0]
# if not chosen_parent:
# # 既存手法: 頂点距離とボーン距離を使用する
# closest_point, normal, face_idx, vertex_distance = bvh.find_nearest(head_pos)
# closest_bone = None
# min_vertex_weight_distance = float('inf')
# if closest_point and face_idx is not None:
# face = bm.faces[face_idx]
# vertex_indices = [v.index for v in face.verts]
# closest_vert_idx = min(vertex_indices,
# key=lambda idx: (base_mesh.data.vertices[idx].co - closest_point).length)
# max_weight = 0
# vertex = base_mesh.data.vertices[closest_vert_idx]
# for group_element in vertex.groups:
# group_name = base_group_index_to_name.get(group_element.group)
# if group_name in candidate_bones:
# weight = group_element.weight
# if weight > max_weight:
# max_weight = weight
# closest_bone = group_name
# min_vertex_weight_distance = vertex_distance
# min_bone_distance = float('inf')
# closest_bone_by_distance = None
# for bone in base_armature.pose.bones:
# if bone.name in candidate_bones:
# bone_head_world = base_armature.matrix_world @ bone.head
# distance = (head_pos - bone_head_world).length
# if distance < min_bone_distance:
# min_bone_distance = distance
# closest_bone_by_distance = bone.name
# if closest_bone_by_distance and min_bone_distance < min_vertex_weight_distance:
# chosen_parent = closest_bone_by_distance
# elif closest_bone:
# chosen_parent = closest_bone
if chosen_parent and chosen_parent == bone_name and bone_name in clothing_armature.data.bones and clothing_armature.data.bones.get(bone_name).parent:
chosen_parent = clothing_armature.data.bones.get(bone_name).parent.name
if chosen_parent not in candidate_bones:
chosen_parent = None
if chosen_parent:
parent_bones[bone_name] = chosen_parent
print(f"bone_name: {bone_name}, chosen_parent: {chosen_parent}")
else:
# chosen_parentが見つからない場合、sub_parent_humanoidがあれば距離を比較
if sub_parent_humanoid:
parent_humanoid_bone = base_armature.pose.bones.get(parent_humanoid)
sub_parent_humanoid_bone = base_armature.pose.bones.get(sub_parent_humanoid)
if parent_humanoid_bone and sub_parent_humanoid_bone:
parent_distance = (head_pos - (base_armature.matrix_world @ parent_humanoid_bone.head)).length
sub_parent_distance = (head_pos - (base_armature.matrix_world @ sub_parent_humanoid_bone.head)).length
if sub_parent_distance < parent_distance:
parent_bones[bone_name] = sub_parent_humanoid
print(f"bone_name: {bone_name}, chosen_parent: {sub_parent_humanoid} (sub_parent, distance: {sub_parent_distance:.4f})")
else:
parent_bones[bone_name] = parent_humanoid
print(f"bone_name: {bone_name}, chosen_parent: {parent_humanoid} (fallback, distance: {parent_distance:.4f})")
else:
parent_bones[bone_name] = parent_humanoid
print(f"bone_name: {bone_name}, chosen_parent: {parent_humanoid} (fallback)")
else:
parent_bones[bone_name] = parent_humanoid
print(f"bone_name: {bone_name}, chosen_parent: {parent_humanoid} (fallback)")
bm.free()
# Replace bones
bpy.context.view_layer.objects.active = clothing_armature
bpy.ops.object.mode_set(mode='EDIT')
clothing_edit_bones = clothing_armature.data.edit_bones
# Store children to update
children_to_update = []
for bone in clothing_edit_bones:
if bone.parent and bone.parent.name in bones_to_replace and bone.name not in bones_to_replace:
children_to_update.append(bone.name)
# Store base bone parents
base_bone_parents = {}
bpy.context.view_layer.objects.active = base_armature
bpy.ops.object.mode_set(mode='EDIT')
for bone in base_armature.data.edit_bones:
if bone.name in base_bones:
base_bone_parents[bone.name] = bone.parent.name if bone.parent and bone.parent.name in base_bones else None
print(base_bone_parents)
bpy.context.view_layer.objects.active = clothing_armature
bpy.ops.object.mode_set(mode='EDIT')
# Process bones to preserve or delete
original_bone_data = {}
for bone_name in bones_to_replace:
if bone_name in clothing_edit_bones:
if bone_name in humanoid_bones_to_preserve:
# Preserve and rename Humanoid bones
orig_bone = clothing_edit_bones[bone_name]
new_name = f"origORS_{bone_name}"
bone_data = {
'head': orig_bone.head.copy(),
'tail': orig_bone.tail.copy(),
'roll': orig_bone.roll,
'matrix': orig_bone.matrix.copy(),
'new_name': new_name,
'humanoid_name': clothing_bones_to_humanoid[bone_name] # Store the humanoid name
}
original_bone_data[bone_name] = bone_data
orig_bone.name = new_name
else:
# Delete non-Humanoid bones
clothing_edit_bones.remove(clothing_edit_bones[bone_name])
# Create new bones
new_bones = {}
for bone_name in base_bones:
source_bone = base_armature.data.edit_bones.get(bone_name)
if source_bone:
new_bone = clothing_edit_bones.new(name=bone_name)
copy_bone_transform(source_bone, new_bone)
new_bones[bone_name] = new_bone
# Set parent relationships for new bones
for bone_name, new_bone in new_bones.items():
parent_name = base_bone_parents.get(bone_name)
if parent_name and parent_name in new_bones:
new_bone.parent = new_bones[parent_name]
# Make original humanoid bones children of new bones based on boneHierarchy
for orig_bone_name, data in original_bone_data.items():
orig_bone = clothing_edit_bones[data['new_name']]
humanoid_name = data['humanoid_name'] # Get the humanoid name for matching
# Find parent using boneHierarchy
parent_humanoid_name = find_humanoid_parent_in_hierarchy(orig_bone_name, clothing_avatar_data, base_avatar_data)
if parent_humanoid_name:
# Find the new bone with matching parent humanoid name
matched_new_bone = None
for new_bone_name, new_bone in new_bones.items():
if new_bone_name in base_bone_to_humanoid:
if base_bone_to_humanoid[new_bone_name] == parent_humanoid_name:
matched_new_bone = new_bone
break
if matched_new_bone:
orig_bone.parent = matched_new_bone
else:
print(f"Warning: No matching new bone found for parent humanoid bone {parent_humanoid_name}")
else:
# Fallback to original matching logic
matched_new_bone = None
for new_bone_name, new_bone in new_bones.items():
if new_bone_name in base_bone_to_humanoid:
if base_bone_to_humanoid[new_bone_name] == humanoid_name:
matched_new_bone = new_bone
break
if matched_new_bone:
orig_bone.parent = matched_new_bone
else:
print(f"Warning: No matching new bone found for humanoid bone {humanoid_name}")
# parent_boneがHumanoidBoneであり、subHumanoidBonesのHumanoidBoneNameに一致するものが存在する場合、subHumanoidBoneの方に入れ替える
if "subHumanoidBones" in base_avatar_data:
sub_humanoid_bones = {}
for sub_humanoid_bone in base_avatar_data["subHumanoidBones"]:
sub_humanoid_bones[sub_humanoid_bone["humanoidBoneName"]] = sub_humanoid_bone["boneName"]
for bone_name, parent_name in parent_bones.items():
if parent_name in base_bone_to_humanoid:
if base_bone_to_humanoid[parent_name] in sub_humanoid_bones.keys():
parent_bones[bone_name] = sub_humanoid_bones[base_bone_to_humanoid[parent_name]]
# Update children parents
for child_name in children_to_update:
child_bone = clothing_edit_bones.get(child_name)
print(f"child_name: {child_name}, child_bone: {child_bone.name if child_bone else None}")
if child_bone:
new_parent_name = parent_bones.get(child_name)
print(f"child_name: {child_name}, new_parent_name: {new_parent_name}")
if new_parent_name and new_parent_name in clothing_edit_bones:
child_bone.parent = clothing_edit_bones[new_parent_name]
print(f"child_name: {child_name}, new_parent_name: {new_parent_name}")
bpy.ops.object.mode_set(mode='OBJECT')
if base_pose_filepath:
print(f"Applying base pose from {base_pose_filepath}")
add_pose_from_json(clothing_armature, base_pose_filepath, base_avatar_data, invert=False)
for obj in clothing_obj_list:
inverse_bone_deform_all_vertices(clothing_armature, obj)
add_pose_from_json(clothing_armature, base_pose_filepath, base_avatar_data, invert=True)
apply_pose_as_rest(clothing_armature)
# Armatureモディファイアを復元
for mod_settings in armature_modifiers:
obj = mod_settings['object']
modifier = obj.modifiers.new(name=mod_settings['name'], type='ARMATURE')
modifier.object = mod_settings['target']
modifier.vertex_group = mod_settings['vertex_group']
modifier.invert_vertex_group = mod_settings['invert_vertex_group']
modifier.use_vertex_groups = mod_settings['use_vertex_groups']
modifier.use_bone_envelopes = mod_settings['use_bone_envelopes']
modifier.use_deform_preserve_volume = mod_settings['use_deform_preserve_volume']
bpy.context.view_layer.objects.active = current_active
if current_mode != 'OBJECT':
bpy.ops.object.mode_set(mode=current_mode)
def load_vertex_group(obj, filepath):
with open(filepath, 'r', encoding='utf-8') as f:
payload = json.load(f)
group_name = payload.get("vertex_group_name")
weights = payload.get("weights", [])
if not group_name:
print("JSON に頂点グループ名が含まれていません。")
return group_name
vg = obj.vertex_groups.get(group_name)
if vg is None:
vg = obj.vertex_groups.new(name=group_name)
else:
indices = [v.index for v in obj.data.vertices]
vg.remove(indices)
missing_vertices = []
for record in weights:
vidx = record.get("vertex_index")
weight = record.get("weight")
if vidx is None or weight is None:
continue
if vidx >= len(obj.data.vertices):
missing_vertices.append(vidx)
continue
vg.add([vidx], weight, 'REPLACE')
obj.vertex_groups.active = vg
print(f"{group_name} を {filepath} から復元しました。")
if missing_vertices:
print(f"存在しない頂点インデックス: {missing_vertices}")
return group_name
def reset_shape_keys(obj):
# オブジェクトにシェイプキーがあるか確認
if obj.data.shape_keys is not None:
# シェイプキーのキーブロックをループ
for kb in obj.data.shape_keys.key_blocks:
# ベースシェイプ(Basis)以外の値を0にする
if kb.name != "Basis":
kb.value = 0.0
def normalize_vertex_weights(obj):
"""
指定されたメッシュオブジェクトのボーンウェイトを正規化する。
Args:
obj: 正規化するメッシュオブジェクト
"""
if obj.type != 'MESH':
print(f"Error: {obj.name} is not a mesh object")
return
# 頂点グループが存在するか確認
if not obj.vertex_groups:
print(f"Warning: {obj.name} has no vertex groups")
return
# 各頂点が少なくとも1つのグループに属しているか確認
for vert in obj.data.vertices:
if not vert.groups:
print(f"Warning: Vertex {vert.index} in {obj.name} has no weights")
# Armatureモディファイアの確認
has_armature = any(mod.type == 'ARMATURE' for mod in obj.modifiers)
if not has_armature:
print(f"Error: {obj.name} has no Armature modifier")
return
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
# アクティブオブジェクトを設定
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='OBJECT')
obj.select_set(True)
bpy.context.view_layer.objects.active = obj
# ウェイトの正規化を実行
bpy.ops.object.vertex_group_normalize_all(
group_select_mode='BONE_DEFORM',
lock_active=False
)
print(f"Normalized weights for {obj.name}")
def merge_auxiliary_to_humanoid_weights(mesh_obj: bpy.types.Object, avatar_data: dict) -> None:
"""Create missing Humanoid bone vertex groups and merge auxiliary weights."""
# Map auxiliary bones to their Humanoid bones
aux_to_humanoid = {}
for aux_set in avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
bone_name = None
# Get the actual bone name for the Humanoid bone
for bone_map in avatar_data.get("humanoidBones", []):
if bone_map["humanoidBoneName"] == humanoid_bone:
bone_name = bone_map["boneName"]
break
if bone_name:
for aux_bone in aux_set["auxiliaryBones"]:
aux_to_humanoid[aux_bone] = bone_name
# Check each auxiliary bone vertex group
for aux_bone in aux_to_humanoid:
if aux_bone in mesh_obj.vertex_groups:
humanoid_bone = aux_to_humanoid[aux_bone]
# Create Humanoid bone group if it doesn't exist
if humanoid_bone not in mesh_obj.vertex_groups:
print(f"Creating missing Humanoid bone group {humanoid_bone} for {mesh_obj.name}")
mesh_obj.vertex_groups.new(name=humanoid_bone)
# Get the vertex groups
aux_group = mesh_obj.vertex_groups[aux_bone]
humanoid_group = mesh_obj.vertex_groups[humanoid_bone]
# Transfer weights from auxiliary to humanoid group
for vert in mesh_obj.data.vertices:
aux_weight = 0
for group in vert.groups:
if group.group == aux_group.index:
aux_weight = group.weight
break
if aux_weight > 0:
# Add weight to humanoid bone group
humanoid_group.add([vert.index], aux_weight, 'ADD')
# Remove auxiliary bone vertex group
mesh_obj.vertex_groups.remove(aux_group)
print(f"Merged weights from {aux_bone} to {humanoid_bone} in {mesh_obj.name}")
def save_vertex_weights(mesh_obj: bpy.types.Object) -> dict:
"""
オブジェクトの全頂点グループのウェイトを記録する(空のグループも含む)
Parameters:
mesh_obj: メッシュオブジェクト
Returns:
保存されたウェイト情報のディクショナリ(vertex_weights、existing_groups、vertex_ids)
"""
weights_data = {
'vertex_weights': {},
'existing_groups': set(),
'vertex_ids': {}
}
# 全ての既存の頂点グループ名を記録
for group in mesh_obj.vertex_groups:
weights_data['existing_groups'].add(group.name)
# 頂点に整数型のカスタム属性を作成(既に存在する場合は削除して再作成)
mesh = mesh_obj.data
custom_attr_name = "original_vertex_id"
# 既存のカスタム属性を削除
if custom_attr_name in mesh.attributes:
mesh.attributes.remove(mesh.attributes[custom_attr_name])
# 新しい整数型カスタム属性を作成
custom_attr = mesh.attributes.new(name=custom_attr_name, type='INT', domain='POINT')
# 各頂点のウェイトと頂点IDを記録
for vert in mesh.vertices:
vertex_weights = {}
for group in vert.groups:
group_name = mesh_obj.vertex_groups[group.group].name
vertex_weights[group_name] = group.weight
# 頂点のウェイトを記録(空の場合も記録)
weights_data['vertex_weights'][vert.index] = vertex_weights
# カスタム属性に現在の頂点IDを設定
custom_attr.data[vert.index].value = vert.index
# weights_dataにも頂点IDを記録
weights_data['vertex_ids'][vert.index] = vert.index
print(f"Saved vertex weights for {len(mesh.vertices)} vertices with original IDs in {mesh_obj.name}")
return weights_data
def restore_vertex_weights(mesh_obj: bpy.types.Object, weights_data: dict) -> None:
"""
保存されたウェイト情報を使って頂点グループのウェイトを復元する
カスタム属性を使用して頂点IDの対応を管理
Parameters:
mesh_obj: メッシュオブジェクト
weights_data: save_vertex_weights()で保存されたウェイト情報
"""
vertex_weights = weights_data['vertex_weights']
original_groups = weights_data['existing_groups']
saved_vertex_ids = weights_data.get('vertex_ids', {})
# 現在存在するグループのうち、元々存在しなかったグループを削除
current_groups = set(group.name for group in mesh_obj.vertex_groups)
groups_to_remove = current_groups - original_groups
for group_name in groups_to_remove:
if group_name in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.remove(mesh_obj.vertex_groups[group_name])
print(f"Removed vertex group {group_name} from {mesh_obj.name}")
# 元々存在していたグループが削除されている場合は再作成
for group_name in original_groups:
if group_name not in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.new(name=group_name)
# まず全ての頂点グループから全頂点を削除
for group in mesh_obj.vertex_groups:
group.remove(list(range(len(mesh_obj.data.vertices))))
# カスタム属性から頂点IDの対応を取得
mesh = mesh_obj.data
custom_attr_name = "original_vertex_id"
if custom_attr_name not in mesh.attributes:
print(f"Warning: Custom attribute '{custom_attr_name}' not found in {mesh_obj.name}. Using direct index mapping.")
# カスタム属性がない場合は従来の方法でインデックスを直接使用
for vert_index, vertex_weights_dict in vertex_weights.items():
if vert_index < len(mesh.vertices):
for group_name, weight in vertex_weights_dict.items():
if group_name in mesh_obj.vertex_groups:
group = mesh_obj.vertex_groups[group_name]
group.add([vert_index], weight, 'REPLACE')
return
# カスタム属性を取得
custom_attr = mesh.attributes[custom_attr_name]
# 現在の頂点の元の頂点IDを取得してマッピングを作成
current_to_original_mapping = {}
for current_vert in mesh.vertices:
original_id = custom_attr.data[current_vert.index].value
current_to_original_mapping[current_vert.index] = original_id
print(f"Restoring vertex weights using custom attribute mapping for {len(mesh.vertices)} vertices in {mesh_obj.name}")
# 保存されたウェイトを復元(カスタム属性を使用して対応を取る)
restored_count = 0
for current_vert_index, original_vert_id in current_to_original_mapping.items():
if original_vert_id in vertex_weights:
vertex_weights_dict = vertex_weights[original_vert_id]
for group_name, weight in vertex_weights_dict.items():
if group_name in mesh_obj.vertex_groups:
group = mesh_obj.vertex_groups[group_name]
group.add([current_vert_index], weight, 'REPLACE')
restored_count += 1
print(f"Successfully restored weights for {restored_count} vertices in {mesh_obj.name}")
def get_bone_name_from_humanoid(avatar_data: dict, humanoid_bone_name: str) -> str:
"""
humanoidBoneNameから実際のボーン名を取得する
Parameters:
avatar_data: アバターデータ
humanoid_bone_name: ヒューマノイドボーン名
Returns:
実際のボーン名、見つからない場合はNone
"""
for bone_map in avatar_data.get("humanoidBones", []):
if bone_map["humanoidBoneName"] == humanoid_bone_name:
return bone_map["boneName"]
return None
def merge_vertex_group_weights(mesh_obj: bpy.types.Object, source_group_name: str, target_group_name: str) -> None:
"""
指定された頂点グループのウェイトを別のグループに統合する
Parameters:
mesh_obj: メッシュオブジェクト
source_group_name: 統合元のグループ名
target_group_name: 統合先のグループ名
"""
if source_group_name not in mesh_obj.vertex_groups or target_group_name not in mesh_obj.vertex_groups:
return
source_group = mesh_obj.vertex_groups[source_group_name]
target_group = mesh_obj.vertex_groups[target_group_name]
# 各頂点のウェイトを統合
for vert in mesh_obj.data.vertices:
source_weight = 0
for group in vert.groups:
if group.group == source_group.index:
source_weight = group.weight
break
if source_weight > 0:
# ターゲットグループにウェイトを加算
target_group.add([vert.index], source_weight, 'ADD')
def process_bone_weight_consolidation(mesh_obj: bpy.types.Object, avatar_data: dict) -> None:
"""
指定されたルールに従ってボーンウェイトを統合する
Parameters:
mesh_obj: メッシュオブジェクト
avatar_data: アバターデータ
"""
# UpperChest -> Chest への統合
upper_chest_bone = get_bone_name_from_humanoid(avatar_data, "UpperChest")
chest_bone = get_bone_name_from_humanoid(avatar_data, "Chest")
if upper_chest_bone and chest_bone and upper_chest_bone in mesh_obj.vertex_groups:
# Chestグループが存在しない場合は作成
if chest_bone not in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.new(name=chest_bone)
merge_vertex_group_weights(mesh_obj, upper_chest_bone, chest_bone)
print(f"Merged {upper_chest_bone} weights to {chest_bone} in {mesh_obj.name}")
# 胸ボーン -> Chest への統合
breasts_humanoid_bones = [
"LeftBreasts",
"RightBreasts"
]
if chest_bone:
# Chestグループが存在しない場合は作成
if chest_bone not in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.new(name=chest_bone)
for breasts_humanoid in breasts_humanoid_bones:
breasts_bone = get_bone_name_from_humanoid(avatar_data, breasts_humanoid)
if breasts_bone and breasts_bone in mesh_obj.vertex_groups:
merge_vertex_group_weights(mesh_obj, breasts_bone, chest_bone)
print(f"Merged {breasts_bone} weights to {chest_bone} in {mesh_obj.name}")
# Left足指系ボーン -> LeftFoot への統合
left_foot_bone = get_bone_name_from_humanoid(avatar_data, "LeftFoot")
left_toe_humanoid_bones = [
"LeftToes",
"LeftFootThumbProximal",
"LeftFootThumbIntermediate",
"LeftFootThumbDistal",
"LeftFootIndexProximal",
"LeftFootIndexIntermediate",
"LeftFootIndexDistal",
"LeftFootMiddleProximal",
"LeftFootMiddleIntermediate",
"LeftFootMiddleDistal",
"LeftFootRingProximal",
"LeftFootRingIntermediate",
"LeftFootRingDistal",
"LeftFootLittleProximal",
"LeftFootLittleIntermediate",
"LeftFootLittleDistal"
]
if left_foot_bone:
# LeftFootグループが存在しない場合は作成
if left_foot_bone not in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.new(name=left_foot_bone)
for toe_humanoid in left_toe_humanoid_bones:
toe_bone = get_bone_name_from_humanoid(avatar_data, toe_humanoid)
if toe_bone and toe_bone in mesh_obj.vertex_groups:
merge_vertex_group_weights(mesh_obj, toe_bone, left_foot_bone)
print(f"Merged {toe_bone} weights to {left_foot_bone} in {mesh_obj.name}")
# Right足指系ボーン -> RightFoot への統合
right_foot_bone = get_bone_name_from_humanoid(avatar_data, "RightFoot")
right_toe_humanoid_bones = [
"RightToes",
"RightFootThumbProximal",
"RightFootThumbIntermediate",
"RightFootThumbDistal",
"RightFootIndexProximal",
"RightFootIndexIntermediate",
"RightFootIndexDistal",
"RightFootMiddleProximal",
"RightFootMiddleIntermediate",
"RightFootMiddleDistal",
"RightFootRingProximal",
"RightFootRingIntermediate",
"RightFootRingDistal",
"RightFootLittleProximal",
"RightFootLittleIntermediate",
"RightFootLittleDistal"
]
if right_foot_bone:
# RightFootグループが存在しない場合は作成
if right_foot_bone not in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.new(name=right_foot_bone)
for toe_humanoid in right_toe_humanoid_bones:
toe_bone = get_bone_name_from_humanoid(avatar_data, toe_humanoid)
if toe_bone and toe_bone in mesh_obj.vertex_groups:
merge_vertex_group_weights(mesh_obj, toe_bone, right_foot_bone)
print(f"Merged {toe_bone} weights to {right_foot_bone} in {mesh_obj.name}")
def get_deformation_bone_groups(avatar_data: dict) -> list:
"""
Get list of bone groups for deformation mask from avatar data,
excluding Head and its auxiliary bones.
Parameters:
avatar_data: Avatar data containing bone information
Returns:
List of bone names for deformation mask
"""
bone_groups = set()
# Get mapping of humanoid bones
for bone_map in avatar_data.get("humanoidBones", []):
if "humanoidBoneName" in bone_map and "boneName" in bone_map:
# Skip Head bone
if bone_map["humanoidBoneName"] != "Head":
bone_groups.add(bone_map["boneName"])
# Get auxiliary bones mapping
for aux_set in avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
# Skip Head's auxiliary bones
if humanoid_bone != "Head":
aux_bones = aux_set["auxiliaryBones"]
bone_groups.update(aux_bones)
return sorted(list(bone_groups))
def create_deformation_mask(obj: bpy.types.Object, avatar_data: dict) -> None:
"""
Create deformation mask vertex group based on avatar data.
Parameters:
obj: Mesh object to process
avatar_data: Avatar data containing bone information
"""
# 入力チェック
if obj.type != 'MESH':
print(f"Error: {obj.name} is not a mesh object")
return
# Get bone groups from avatar data
group_names = get_deformation_bone_groups(avatar_data)
# TransferMaskという名前の頂点グループがすでに存在する場合は削除
if "DeformationMask" in obj.vertex_groups:
obj.vertex_groups.remove(obj.vertex_groups["DeformationMask"])
# 新しい頂点グループを作成
deformation_mask = obj.vertex_groups.new(name="DeformationMask")
# 各頂点をチェック
for vert in obj.data.vertices:
should_add = False
weight_sum = 0.0
# 指定された頂点グループのウェイトをチェック
for group_name in group_names:
try:
group = obj.vertex_groups[group_name]
# その頂点のウェイト値を取得
weight = 0
for g in vert.groups:
if g.group == group.index:
weight = g.weight
# ウェイトが0より大きければフラグを立てる
if weight > 0:
should_add = True
weight_sum += weight
except KeyError:
# 頂点グループが存在しない場合はスキップ
continue
# フラグが立っている場合、DeformationMaskグループに頂点を追加
if should_add:
deformation_mask.add([vert.index], weight_sum, 'REPLACE')
def create_field_distance_vertex_group(obj, field_data_path, group_name="FieldDistanceWeights", batch_size=1000, k=8):
"""
Deformation Fieldの近傍頂点のWeight値を使用して頂点グループを作成
各頂点のウェイト値は1.0 - (Fieldの近傍頂点のWeight値の重み付き平均)とする
バッチ処理を行い高速化する
Parameters:
obj: メッシュオブジェクト
field_data_path: Deformation Fieldデータのパス
group_name: 作成する頂点グループの名前
batch_size: バッチサイズ
k: k近傍法のk値
Returns:
作成した頂点グループ
"""
# Deformation Fieldデータを取得
field_info = get_deformation_field(field_data_path)
field_points = field_info['field_points']
field_weights = field_info['field_weights']
field_matrix = field_info['world_matrix']
field_matrix_inv = field_info['world_matrix_inv']
kdtree = field_info['kdtree']
# 頂点グループが既に存在する場合は削除
if group_name in obj.vertex_groups:
obj.vertex_groups.remove(obj.vertex_groups[group_name])
# 新しい頂点グループを作成
vertex_group = obj.vertex_groups.new(name=group_name)
# 評価済みメッシュの頂点位置を取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
vertices = np.array([v.co for v in eval_mesh.vertices])
num_vertices = len(vertices)
# 結果を格納する配列
vertex_weights = np.zeros(num_vertices)
# バッチ処理
for start_idx in range(0, num_vertices, batch_size):
end_idx = min(start_idx + batch_size, num_vertices)
batch_vertices = vertices[start_idx:end_idx]
# バッチ内の頂点をフィールド空間に変換
batch_world = np.array([eval_obj.matrix_world @ Vector(v) for v in batch_vertices])
batch_field = np.array([field_matrix_inv @ Vector(v) for v in batch_world])
# k近傍検索(バッチ処理)
distances, indices = kdtree.query(batch_field, k=k)
# 各頂点のフィールドウェイトを計算
for i, (dist, idx) in enumerate(zip(distances, indices)):
# 距離による重み付け
weights = 1.0 / (dist + 0.0001) # 0除算を防止
weights = weights / weights.sum() # 正規化
# フィールドウェイトの重み付き平均
field_weight = np.sum(field_weights[idx] * weights)
# 結果を保存 (1.0 - フィールドウェイト)
vertex_weights[start_idx + i] = 1.0 - field_weight
# 頂点グループにウェイトを設定
for i, weight in enumerate(vertex_weights):
if weight > 0:
vertex_group.add([i], weight, 'REPLACE')
print(f"Created field distance vertex group '{group_name}' for {obj.name} using k={k} neighbors and batch processing")
return vertex_group
def create_overlapping_vertices_attributes(clothing_meshes, base_avatar_data, distance_threshold=0.0001, edge_angle_threshold=3, weight_similarity_threshold=0.1, overlap_attr_name="Overlapped", world_pos_attr_name="OriginalWorldPosition"):
"""
ワールド座標上でほぼ重なっていて、ウェイトパターンが類似している頂点を検出し、
カスタム頂点属性としてフラグ(1.0)を設定する。またワールド頂点座標も別の属性として保存する。
Parameters:
clothing_meshes: 処理対象の衣装メッシュのリスト
base_avatar_data: ベースアバターデータ
distance_threshold: 重なっていると判定する距離の閾値
weight_similarity_threshold: ウェイトパターンの類似性閾値(小さいほど厳密)
overlap_attr_name: 重なり頂点フラグ用のカスタム属性の名前
world_pos_attr_name: ワールド座標を保存するカスタム属性の名前
"""
print(f"Creating custom attributes for overlapping vertices with similar weight patterns...")
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# 各メッシュに対して処理
for mesh_obj in clothing_meshes:
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = mesh_obj.evaluated_get(depsgraph)
mesh = eval_obj.data
bm = bmesh.new()
bm.from_mesh(mesh)
bm.verts.ensure_lookup_table()
bm.edges.ensure_lookup_table()
bm.faces.ensure_lookup_table()
# 頂点インデックスのマッピングを作成(BMesh内のインデックス → 元のメッシュのインデックス)
vert_indices = {v.index: i for i, v in enumerate(bm.verts)}
# 頂点データを収集
all_vertices = []
for vert_idx, vert in enumerate(bm.verts):
# 頂点のワールド座標を計算
world_pos = mesh_obj.matrix_world @ vert.co
# 頂点に接続するエッジの方向ベクトルを収集
edge_directions = []
bm_vert = bm.verts[vert_idx]
for edge in bm_vert.link_edges:
other_vert = edge.other_vert(bm_vert)
direction = (other_vert.co - bm_vert.co).normalized()
edge_directions.append(direction)
# 対象グループのウェイトを収集
weights = {}
orig_vert = mesh_obj.data.vertices[vert_indices[vert_idx]]
for group_name in target_groups:
if group_name in mesh_obj.vertex_groups:
group = mesh_obj.vertex_groups[group_name]
for g in orig_vert.groups:
if g.group == group.index:
weights[group_name] = g.weight
break
# 頂点データを保存
all_vertices.append({
'vert_idx': vert_idx,
'world_pos': world_pos,
'edge_directions': edge_directions,
'weights': weights
})
# KDTreeを構築して近接頂点を効率的に検索
positions = [v['world_pos'] for v in all_vertices]
kdtree = KDTree(len(positions))
for i, pos in enumerate(positions):
kdtree.insert(pos, i)
kdtree.balance()
# 重なり頂点用のカスタム属性を作成または取得
if overlap_attr_name not in mesh_obj.data.attributes:
mesh_obj.data.attributes.new(name=overlap_attr_name, type='FLOAT', domain='POINT')
overlap_attr = mesh_obj.data.attributes[overlap_attr_name]
# ワールド座標用のカスタム属性を作成または取得
if world_pos_attr_name not in mesh_obj.data.attributes:
mesh_obj.data.attributes.new(name=world_pos_attr_name, type='FLOAT_VECTOR', domain='POINT')
pos_attr = mesh_obj.data.attributes[world_pos_attr_name]
# 初期値を設定 (重なり属性は0、ワールド座標は現在の位置)
for i, vertex in enumerate(mesh_obj.data.vertices):
overlap_attr.data[i].value = 0.0
world_position = mesh_obj.matrix_world @ vertex.co
pos_attr.data[i].vector = world_position
# 重なっている頂点を検出してフラグを設定
processed = set() # 処理済みの頂点インデックスを記録
cluster_id = 0 # クラスタID(デバッグ用)
for i, vert_data in enumerate(all_vertices):
mesh_vertex_idx = vert_indices[all_vertices[i]['vert_idx']]
world_pos = all_vertices[i]['world_pos']
pos_attr.data[mesh_vertex_idx].vector = world_pos # ワールド座標を保存
if i in processed:
continue
# 近接頂点を検索
overlapping_indices = []
for (co, idx, dist) in kdtree.find_range(vert_data['world_pos'], distance_threshold):
if idx != i and idx not in processed: # 自分自身と処理済みの頂点は除外
# エッジ方向の類似性をチェック
if check_edge_direction_similarity(vert_data['edge_directions'], all_vertices[idx]['edge_directions'], edge_angle_threshold):
# ウェイトパターンの類似性をチェック
similarity = calculate_weight_pattern_similarity(
vert_data['weights'], all_vertices[idx]['weights'])
# 類似性が閾値以上の場合のみ追加
if similarity >= (1.0 - weight_similarity_threshold):
overlapping_indices.append(idx)
if not overlapping_indices:
continue
# 重なっている頂点グループを含める
overlapping_indices.append(i)
processed.add(i)
# 重なっている頂点の属性を設定
for vert_idx in overlapping_indices:
mesh_vertex_idx = vert_indices[all_vertices[vert_idx]['vert_idx']]
overlap_attr.data[mesh_vertex_idx].value = 1.0 # 重なり検出フラグを設定
processed.add(vert_idx)
cluster_id += 1
# BMeshを解放
bm.free()
mesh_obj.data.update()
print(f"Created custom attributes '{overlap_attr_name}' and '{world_pos_attr_name}' for {mesh_obj.name} with {cluster_id} overlapping vertex clusters")
print(f"Distance threshold: {distance_threshold}")
print(f"Weight similarity threshold: {weight_similarity_threshold}")
def create_overlapping_vertices_uvmap(clothing_meshes, base_avatar_data, distance_threshold=0.0001, edge_angle_threshold=3, weight_similarity_threshold=0.1, uv_name="OverlappingVertices", circle_center=(0.0, 0.0), circle_radius=10.0):
"""
ワールド座標上でほぼ重なっていて、ウェイトパターンが類似している頂点を検出し、新しいUVマップ上で円周上に重ねて保存する
Parameters:
clothing_meshes: 処理対象の衣装メッシュのリスト
base_avatar_data: ベースアバターデータ
distance_threshold: 重なっていると判定する距離の閾値
weight_similarity_threshold: ウェイトパターンの類似性閾値(小さいほど厳密)
uv_name: 作成するUVマップの名前
circle_center: UV座標の円の中心座標 (x, y)
circle_radius: UV座標の円の半径
"""
print(f"Creating UV map for overlapping vertices with similar weight patterns, name: {uv_name}...")
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# 各メッシュに対して処理
for mesh_obj in clothing_meshes:
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = mesh_obj.evaluated_get(depsgraph)
mesh = eval_obj.data
bm = bmesh.new()
bm.from_mesh(mesh)
bm.verts.ensure_lookup_table()
bm.edges.ensure_lookup_table()
bm.faces.ensure_lookup_table()
# 頂点インデックスのマッピングを作成(BMesh内のインデックス → 元のメッシュのインデックス)
vert_indices = {v.index: i for i, v in enumerate(bm.verts)}
# 頂点データを収集
all_vertices = []
for vert_idx, vert in enumerate(bm.verts):
# 頂点のワールド座標を計算
world_pos = mesh_obj.matrix_world @ vert.co
# 頂点に接続するエッジの方向ベクトルを収集
edge_directions = []
bm_vert = bm.verts[vert_idx]
for edge in bm_vert.link_edges:
other_vert = edge.other_vert(bm_vert)
direction = (other_vert.co - bm_vert.co).normalized()
edge_directions.append(direction)
# 対象グループのウェイトを収集
weights = {}
orig_vert = mesh_obj.data.vertices[vert_indices[vert_idx]]
for group_name in target_groups:
if group_name in mesh_obj.vertex_groups:
group = mesh_obj.vertex_groups[group_name]
for g in orig_vert.groups:
if g.group == group.index:
weights[group_name] = g.weight
break
# 頂点データを保存
all_vertices.append({
'vert_idx': vert_idx,
'world_pos': world_pos,
'edge_directions': edge_directions,
'weights': weights
})
# KDTreeを構築して近接頂点を効率的に検索
positions = [v['world_pos'] for v in all_vertices]
kdtree = KDTree(len(positions))
for i, pos in enumerate(positions):
kdtree.insert(pos, i)
kdtree.balance()
# 新しいUVマップを作成または取得
if uv_name not in mesh_obj.data.uv_layers:
mesh_obj.data.uv_layers.new(name=uv_name)
uv_layer = mesh_obj.data.uv_layers[uv_name]
# すべてのUV座標を原点に初期化
for uv_data in uv_layer.data:
uv_data.uv = (0.0, 0.0)
# 重なっている頂点を検出してUV座標を設定
processed = set() # 処理済みの頂点インデックスを記録
cluster_id = 0 # クラスタID(UV座標の配置に使用)
for i, vert_data in enumerate(all_vertices):
if i in processed:
continue
# 近接頂点を検索
overlapping_indices = []
for (co, idx, dist) in kdtree.find_range(vert_data['world_pos'], distance_threshold):
if idx != i and idx not in processed: # 自分自身と処理済みの頂点は除外
# エッジ方向の類似性をチェック
if check_edge_direction_similarity(vert_data['edge_directions'], all_vertices[idx]['edge_directions'], edge_angle_threshold):
# ウェイトパターンの類似性をチェック
similarity = calculate_weight_pattern_similarity(
vert_data['weights'], all_vertices[idx]['weights'])
# 類似性が閾値以上の場合のみ追加
if similarity >= (1.0 - weight_similarity_threshold):
overlapping_indices.append(idx)
if not overlapping_indices:
continue
# 重なっている頂点グループを含める
overlapping_indices.append(i)
processed.add(i)
# 円周上の位置を計算
angle = 2 * math.pi * cluster_id / max(1, len(all_vertices) // 10) # クラスタ数に応じて角度を分配
uv_x = circle_center[0] + circle_radius * math.cos(angle)
uv_y = circle_center[1] + circle_radius * math.sin(angle)
# 重なっている頂点のUV座標を設定
for vert_idx in overlapping_indices:
for loop in mesh_obj.data.loops:
if loop.vertex_index == vert_indices[all_vertices[vert_idx]['vert_idx']]:
uv_layer.data[loop.index].uv = (uv_x, uv_y)
processed.add(vert_idx)
cluster_id += 1
# BMeshを解放
bm.free()
mesh_obj.data.update()
print(f"Created UV map '{uv_name}' for {mesh_obj.name} with {cluster_id} overlapping vertex clusters")
print(f"UV circle: center {circle_center}, radius {circle_radius}")
print(f"Weight similarity threshold: {weight_similarity_threshold}")
# ウェイトパターンの類似性を計算する関数
def calculate_weight_pattern_similarity(weights1, weights2):
"""
2つのウェイトパターン間の類似性を計算する
Parameters:
weights1: 1つ目のウェイトパターン {group_name: weight}
weights2: 2つ目のウェイトパターン {group_name: weight}
Returns:
float: 類似度(0.0〜1.0、1.0が完全一致)
"""
# 両方のパターンに存在するグループを取得
all_groups = set(weights1.keys()) | set(weights2.keys())
if not all_groups:
return 0.0
# 各グループのウェイト差の合計を計算
total_diff = 0.0
for group in all_groups:
w1 = weights1.get(group, 0.0)
w2 = weights2.get(group, 0.0)
total_diff += abs(w1 - w2)
# 正規化(グループ数で割る)
normalized_diff = total_diff / len(all_groups)
# 類似度に変換(差が小さいほど類似度が高い)
similarity = 1.0 - min(normalized_diff, 1.0)
return similarity
def get_vertex_groups_and_weights(mesh_obj, vertex_index):
"""頂点の所属する頂点グループとウェイトを取得"""
groups = {}
vertex = mesh_obj.data.vertices[vertex_index]
for g in vertex.groups:
group_name = mesh_obj.vertex_groups[g.group].name
groups[group_name] = g.weight
return groups
def get_armature_from_modifier(mesh_obj):
"""Armatureモディファイアからアーマチュアを取得"""
for modifier in mesh_obj.modifiers:
if modifier.type == 'ARMATURE':
return modifier.object
return None
def calculate_inverse_pose_matrix(mesh_obj, armature_obj, vertex_index):
"""指定された頂点のポーズ逆行列を計算"""
# 頂点グループとウェイトの取得
weights = get_vertex_groups_and_weights(mesh_obj, vertex_index)
if not weights:
print(f"頂点 {vertex_index} にウェイトが割り当てられていません")
return None
# 最終的な変換行列の初期化
final_matrix = Matrix.Identity(4)
final_matrix.zero()
total_weight = 0
# 各ボーンの影響を計算
for bone_name, weight in weights.items():
if weight > 0 and bone_name in armature_obj.data.bones:
bone = armature_obj.data.bones[bone_name]
pose_bone = armature_obj.pose.bones.get(bone_name)
if bone and pose_bone:
# ボーンの最終的な行列を計算
mat = armature_obj.matrix_world @ \
pose_bone.matrix @ \
bone.matrix_local.inverted() @ \
armature_obj.matrix_world.inverted()
# ウェイトを考慮して行列を加算
final_matrix += mat * weight
total_weight += weight
# ウェイトの合計で正規化
if total_weight > 0:
final_matrix = final_matrix * (1.0 / total_weight)
# 逆行列を計算して返す
try:
return final_matrix.inverted()
except Exception as e:
print(f"error: {e}")
return Matrix.Identity(4)
def inverse_bone_deform_all_vertices(armature_obj, mesh_obj):
"""
メッシュオブジェクトの評価後の頂点のワールド座標から、
現在のArmatureオブジェクトのポーズの逆変換をすべての頂点に対して行う
Parameters:
armature_obj: Armatureオブジェクト
mesh_obj: メッシュオブジェクト
Returns:
np.ndarray: すべての頂点の逆変換後の座標(ローカル座標)
通常のボーン変形: 変形後 = Σ(weight_i × bone_matrix_i) × 変形前
この関数の逆変換: 変形前 = [Σ(weight_i × bone_matrix_i)]^(-1) × 変形後
"""
if not armature_obj or armature_obj.type != 'ARMATURE':
raise ValueError("有効なArmatureオブジェクトを指定してください")
if not mesh_obj or mesh_obj.type != 'MESH':
raise ValueError("有効なメッシュオブジェクトを指定してください")
# ワールド座標に変換(変形後の頂点位置)
vertices = [v.co.copy() for v in mesh_obj.data.vertices]
# 結果を格納するリスト
inverse_transformed_vertices = []
print(f"ボーン変形の逆変換を開始: {len(vertices)}頂点")
# 各頂点に対して逆変換を適用
for vertex_index in range(len(vertices)):
pos = vertices[vertex_index]
# 頂点のボーンウェイトを取得
weights = get_vertex_groups_and_weights(mesh_obj, vertex_index)
if not weights:
# ウェイトがない場合はそのまま追加
print(f"警告: 頂点 {vertex_index} のウェイトがないため、単位行列を使用します")
inverse_transformed_vertices.append(pos)
continue
# ウェイト付き合成変形行列を計算
combined_matrix = Matrix.Identity(4)
combined_matrix.zero()
total_weight = 0.0
for bone_name, weight in weights.items():
if weight > 0 and bone_name in armature_obj.data.bones:
bone = armature_obj.data.bones[bone_name]
pose_bone = armature_obj.pose.bones.get(bone_name)
if bone and pose_bone:
# ボーンの変形行列を計算
# この行列は、レストポーズからポーズ後への変形を表す
bone_matrix = pose_bone.matrix @ \
bone.matrix_local.inverted()
# ウェイトを考慮して行列を加算
combined_matrix += bone_matrix * weight
total_weight += weight
# ウェイトの合計で正規化
if total_weight > 0:
combined_matrix = combined_matrix * (1.0 / total_weight)
else:
# ウェイトがない場合は単位行列
print(f"警告: 頂点 {vertex_index} のウェイトがないため、単位行列を使用します")
combined_matrix = Matrix.Identity(4)
# 合成行列の逆行列を計算
try:
inverse_matrix = combined_matrix.inverted()
except:
# 逆行列が計算できない場合は単位行列を使用
inverse_matrix = Matrix.Identity(4)
print(f"警告: 頂点 {vertex_index} の逆行列を計算できませんでした")
# 逆変換を適用
# inverse_matrix を適用して「レストポーズのローカル座標」を取得
rest_pose_pos = inverse_matrix @ pos
inverse_transformed_vertices.append(rest_pose_pos)
# 進捗表示(1000頂点ごと)
if (vertex_index + 1) % 1000 == 0:
print(f"進捗: {vertex_index + 1}/{len(vertices)} 頂点処理完了")
print(f"ボーン変形の逆変換が完了しました")
# 変形後の頂点をメッシュに適用
if mesh_obj.data.shape_keys:
for shape_key in mesh_obj.data.shape_keys.key_blocks:
if shape_key.name != "Basis":
for i, vert in enumerate(shape_key.data):
vert.co += inverse_transformed_vertices[i] - vertices[i]
basis_shape_key = mesh_obj.data.shape_keys.key_blocks["Basis"]
for i, vert in enumerate(basis_shape_key.data):
vert.co = inverse_transformed_vertices[i]
for vertex_index, pos in enumerate(inverse_transformed_vertices):
mesh_obj.data.vertices[vertex_index].co = pos
# numpy配列に変換して返す(Vector型からnumpy配列へ)
result = np.array([[v[0], v[1], v[2]] for v in inverse_transformed_vertices])
return result
def batch_process_vertices_multi_step(vertices, all_field_points, all_delta_positions, field_weights,
field_matrix, field_matrix_inv, target_matrix, target_matrix_inv,
deform_weights=None, rbf_epsilon=0.00001, batch_size=1000, k=8):
"""
多段階のDeformation Fieldを使用して頂点を処理する(SaveAndApplyFieldAuto.pyのapply_field_dataと同様)
Parameters:
vertices: 処理対象の頂点配列
all_field_points: 各ステップのフィールドポイント配列
all_delta_positions: 各ステップのデルタポジション配列
field_weights: フィールドウェイト
field_matrix: フィールドマトリックス
field_matrix_inv: フィールドマトリックスの逆行列
target_matrix: ターゲットマトリックス
target_matrix_inv: ターゲットマトリックスの逆行列
rbf_epsilon: RBF補間のイプシロン値
batch_size: バッチサイズ
k: 近傍点数
Returns:
変形後の頂点配列(ワールド座標)
"""
num_vertices = len(vertices)
num_steps = len(all_field_points)
# 累積変位を初期化
cumulative_displacements = np.zeros((num_vertices, 3))
# 現在の頂点位置(ワールド座標)を保存
current_world_positions = np.array([target_matrix @ Vector(v) for v in vertices])
# もしdeform_weightsがNoneの場合は、全ての頂点のウェイトを1.0とする
if deform_weights is None:
deform_weights = np.ones(num_vertices)
# 各ステップの変位を累積的に適用
for step in range(num_steps):
field_points = all_field_points[step]
delta_positions = all_delta_positions[step]
print(f"ステップ {step+1}/{num_steps} の変形を適用中...")
print(f"使用するフィールド頂点数: {len(field_points)}")
# KDTreeを使用して近傍点を検索(各ステップで新しいKDTreeを構築)
kdtree = cKDTree(field_points)
# カスタムRBF補間で新しい頂点位置を計算
step_displacements = np.zeros((num_vertices, 3))
for start_idx in range(0, num_vertices, batch_size):
end_idx = min(start_idx + batch_size, num_vertices)
batch_weights = deform_weights[start_idx:end_idx]
# バッチ内の全頂点をフィールド空間に変換(現在の累積変位を考慮)
batch_world = current_world_positions[start_idx:end_idx].copy()
batch_field = np.array([field_matrix_inv @ Vector(v) for v in batch_world])
# 各頂点ごとに逆距離加重法で補間
batch_displacements = np.zeros((len(batch_field), 3))
for i, point in enumerate(batch_field):
# 近傍点を検索(最大k点)
k_use = min(k, len(field_points))
distances, indices = kdtree.query(point, k=k_use)
# 距離が0の場合(完全に一致する点がある場合)
if distances[0] < 1e-10:
batch_displacements[i] = delta_positions[indices[0]]
continue
# 逆距離の重みを計算
weights = 1.0 / np.sqrt(distances**2 + rbf_epsilon**2)
# 重みの正規化
weights /= np.sum(weights)
# 重み付き平均で変位を計算
weighted_deltas = delta_positions[indices] * weights[:, np.newaxis]
batch_displacements[i] = np.sum(weighted_deltas, axis=0) * batch_weights[i]
# ワールド空間での変位を計算
for i, displacement in enumerate(batch_displacements):
world_displacement = field_matrix.to_3x3() @ Vector(displacement)
step_displacements[start_idx + i] = world_displacement
# 現在のワールド位置を更新(次のステップのために)
current_world_positions[start_idx + i] += world_displacement
# このステップの変位を累積変位に追加
cumulative_displacements += step_displacements
#print(f"ステップ {step+1} 完了: 最大変位 {np.max(np.linalg.norm(step_displacements, axis=1)):.6f}")
# 最終的な変形後の位置を返す
final_world_positions = np.array([target_matrix @ Vector(v) for v in vertices]) + cumulative_displacements
return final_world_positions
def batch_process_vertices_with_custom_range(vertices, all_field_points, all_delta_positions, field_weights,
field_matrix, field_matrix_inv, target_matrix, target_matrix_inv,
start_value, end_value,
deform_weights=None, rbf_epsilon=0.00001, batch_size=1000, k=8):
"""
任意の値の範囲でフィールドによる変形を行う
Parameters:
vertices: 処理対象の頂点配列
all_field_points: 各ステップのフィールドポイント配列
all_delta_positions: 各ステップのデルタポジション配列
field_weights: フィールドウェイト
field_matrix: フィールドマトリックス
field_matrix_inv: フィールドマトリックスの逆行列
target_matrix: ターゲットマトリックス
target_matrix_inv: ターゲットマトリックスの逆行列
start_value: 開始値(シェイプキー値)
end_value: 終了値(シェイプキー値)
deform_weights: 変形ウェイト
rbf_epsilon: RBF補間のイプシロン値
batch_size: バッチサイズ
k: 近傍点数
Returns:
変形後の頂点配列(ワールド座標)
"""
num_vertices = len(vertices)
num_steps = len(all_field_points)
# 累積変位を初期化
cumulative_displacements = np.zeros((num_vertices, 3))
# 現在の頂点位置(ワールド座標)を保存
current_world_positions = np.array([target_matrix @ Vector(v) for v in vertices])
# もしdeform_weightsがNoneの場合は、全ての頂点のウェイトを1.0とする
if deform_weights is None:
deform_weights = np.ones(num_vertices)
# ステップごとの値を計算
step_size = 1.0 / num_steps
# 各ステップで処理
processed_steps = []
for step in range(num_steps):
step_start = step * step_size
step_end = (step + 1) * step_size
# start_valueからend_valueに増加(start_value < end_value)
if step_start + 0.00001 <= end_value and step_end - 0.00001 >= start_value:
processed_steps.append((step, step_start, step_end))
print(f"処理対象ステップ: {len(processed_steps)}")
# 各ステップの変位を累積的に適用
for step_idx, (step, step_start, step_end) in enumerate(processed_steps):
field_points = all_field_points[step].copy()
delta_positions = all_delta_positions[step].copy()
original_delta_positions = all_delta_positions[step].copy()
print(f"ステップ {step_idx+1}/{len(processed_steps)} (step {step}) の変形を適用中...")
print(f"ステップ値範囲: {step_start:.3f} -> {step_end:.3f}")
print(f"使用するフィールド頂点数: {len(field_points)}")
# 任意の値からの変形
if start_value != step_start:
if start_value >= step_start + 0.00001:
# 開始値がステップの開始値より大きい場合
adjustment_factor = (start_value - step_start) / step_size
adjustment_delta = original_delta_positions * adjustment_factor
field_points += adjustment_delta
delta_positions -= adjustment_delta
if end_value != step_end:
if end_value <= step_end - 0.00001:
# 終了値がステップの終了値より小さい場合
adjustment_factor = (step_end - end_value) / step_size
adjustment_delta = original_delta_positions * adjustment_factor
delta_positions -= adjustment_delta
# KDTreeを使用して近傍点を検索
kdtree = cKDTree(field_points)
# カスタムRBF補間で新しい頂点位置を計算
step_displacements = np.zeros((num_vertices, 3))
for start_idx in range(0, num_vertices, batch_size):
end_idx = min(start_idx + batch_size, num_vertices)
batch_weights = deform_weights[start_idx:end_idx]
# バッチ内の全頂点をフィールド空間に変換
batch_world = current_world_positions[start_idx:end_idx].copy()
batch_field = np.array([field_matrix_inv @ Vector(v) for v in batch_world])
# 各頂点ごとに逆距離加重法で補間
batch_displacements = np.zeros((len(batch_field), 3))
for i, point in enumerate(batch_field):
# 近傍点を検索(最大k点)
k_use = min(k, len(field_points))
distances, indices = kdtree.query(point, k=k_use)
# 距離が0の場合(完全に一致する点がある場合)
if distances[0] < 1e-10:
batch_displacements[i] = delta_positions[indices[0]]
continue
# 逆距離の重みを計算
weights = 1.0 / np.sqrt(distances**2 + rbf_epsilon**2)
# 重みの正規化
weights /= np.sum(weights)
# 重み付き平均で変位を計算
weighted_deltas = delta_positions[indices] * weights[:, np.newaxis]
batch_displacements[i] = np.sum(weighted_deltas, axis=0) * batch_weights[i]
# ワールド空間での変位を計算
for i, displacement in enumerate(batch_displacements):
world_displacement = field_matrix.to_3x3() @ Vector(displacement)
step_displacements[start_idx + i] = world_displacement
# 現在のワールド位置を更新(次のステップのために)
current_world_positions[start_idx + i] += world_displacement
# このステップの変位を累積変位に追加
cumulative_displacements += step_displacements
print(f"ステップ {step_idx+1} 完了")
# 最終的な変形後の位置を返す
final_world_positions = np.array([target_matrix @ Vector(v) for v in vertices]) + cumulative_displacements
return final_world_positions
def batch_process_vertices(vertices, kdtree, field_points, delta_positions, field_weights,
field_matrix, field_matrix_inv, target_matrix, target_matrix_inv,
deform_weights=None, batch_size=1000, k=8):
"""
頂点をバッチで処理
"""
num_vertices = len(vertices)
results = np.zeros((num_vertices, 3))
# もしdeform_weightsがNoneの場合は、全ての頂点のウェイトを1.0とする
if deform_weights is None:
deform_weights = np.ones(num_vertices)
rbf_epsilon = 0.00001
for start_idx in range(0, num_vertices, batch_size):
end_idx = min(start_idx + batch_size, num_vertices)
batch_vertices = vertices[start_idx:end_idx]
batch_weights = deform_weights[start_idx:end_idx]
# バッチ内の全頂点をフィールド空間に変換
batch_world = np.array([target_matrix @ Vector(v) for v in batch_vertices])
batch_field = np.array([field_matrix_inv @ Vector(v) for v in batch_world])
# 最近接点の検索(バッチ処理)
# distances, indices = kdtree.query(batch_field, k=int(k))
distances, indices = kdtree.query(batch_field, k=27)
# 各頂点の変位を計算
for i, (vert_field, dist, idx) in enumerate(zip(batch_field, distances, indices)):
# 重み付き変位を計算
weights = 1.0 / np.sqrt(dist**2 + rbf_epsilon**2)
if weights.sum() > 0.0:
weights /= weights.sum()
else:
weights *= 0
deltas = delta_positions[idx]
displacement = (deltas * weights[:, np.newaxis]).sum(axis=0) * batch_weights[i]
# ワールド空間での変位を計算
world_displacement = field_matrix.to_3x3() @ Vector(displacement)
results[start_idx + i] = batch_world[i] + world_displacement
return results
def get_child_bones_recursive(bone_name: str, armature_obj: bpy.types.Object, clothing_avatar_data: dict = None, is_root: bool = True) -> set:
"""
指定されたボーンのすべての子ボーンを再帰的に取得する
最初に指定されたボーンではないHumanoidボーンとそれ以降の子ボーンは除外する
Parameters:
bone_name: 親ボーンの名前
armature_obj: アーマチュアオブジェクト
clothing_avatar_data: 衣装のアバターデータ(Humanoidボーンの判定に使用)
is_root: 最初に指定されたボーンかどうか
Returns:
set: 子ボーンの名前のセット
"""
children = set()
if bone_name not in armature_obj.data.bones:
return children
# Humanoidボーンの判定用マッピングを作成
humanoid_bones = set()
if clothing_avatar_data:
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
humanoid_bones.add(bone_map["boneName"])
bone = armature_obj.data.bones[bone_name]
for child in bone.children:
# 最初に指定されたボーンではないHumanoidボーンの場合、そのボーンとその子ボーンを除外
if not is_root and child.name in humanoid_bones:
# このボーンとその子ボーンをスキップ
continue
children.add(child.name)
children.update(get_child_bones_recursive(child.name, armature_obj, clothing_avatar_data, False))
return children
def create_blendshape_mask(target_obj, mask_bones, clothing_avatar_data, field_name="", store_debug_mask=True):
"""
指定されたボーンとその子ボーンのウェイトを合算したマスクを作成する
Parameters:
target_obj: 対象のメッシュオブジェクト
mask_bones: マスクに使用するHumanoidボーンのリスト
clothing_avatar_data: 衣装アバターのデータ(Humanoidボーン名の変換に使用)
field_name: フィールド名(デバッグ用の頂点グループ名に使用)
store_debug_mask: デバッグ用のマスク頂点グループを保存するかどうか
Returns:
numpy.ndarray: 各頂点のマスクウェイト値の配列
"""
#print(f"mask_bones: {mask_bones}")
mask_weights = np.zeros(len(target_obj.data.vertices))
# アーマチュアを取得
armature_obj = None
for modifier in target_obj.modifiers:
if modifier.type == 'ARMATURE':
armature_obj = modifier.object
break
if not armature_obj:
print(f"Warning: No armature found for {target_obj.name}")
return mask_weights
# Humanoidボーン名からボーン名への変換マップを作成
humanoid_to_bone = {}
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if "humanoidBoneName" in bone_map and "boneName" in bone_map:
humanoid_to_bone[bone_map["humanoidBoneName"]] = bone_map["boneName"]
# 補助ボーンのマッピングを作成
auxiliary_bones = {}
for aux_set in clothing_avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
auxiliary_bones[humanoid_bone] = aux_set["auxiliaryBones"]
# デバッグ用に処理したボーンの情報を収集
processed_bones = set()
# 対象となるすべてのボーンを収集(Humanoidボーン、補助ボーン、それらの子ボーン)
target_bones = set()
# 各Humanoidボーンに対して処理
for humanoid_bone in mask_bones:
# メインのボーンを追加
bone_name = humanoid_to_bone.get(humanoid_bone)
if bone_name:
target_bones.add(bone_name)
processed_bones.add(bone_name)
# 子ボーンを追加
target_bones.update(get_child_bones_recursive(bone_name, armature_obj, clothing_avatar_data))
# 補助ボーンとその子ボーンを追加
if humanoid_bone in auxiliary_bones:
for aux_bone in auxiliary_bones[humanoid_bone]:
target_bones.add(aux_bone)
processed_bones.add(aux_bone)
# 補助ボーンの子ボーンを追加
target_bones.update(get_child_bones_recursive(aux_bone, armature_obj, clothing_avatar_data))
#print(f"target_bones: {target_bones}")
# 各頂点のウェイトを計算
for vert in target_obj.data.vertices:
for bone_name in target_bones:
if bone_name in target_obj.vertex_groups:
group = target_obj.vertex_groups[bone_name]
for g in vert.groups:
if g.group == group.index:
mask_weights[vert.index] += g.weight
break
# ウェイトを0-1の範囲にクランプ
mask_weights = np.clip(mask_weights, 0.0, 1.0)
# デバッグ用の頂点グループを作成
if store_debug_mask:
# 頂点グループ名を生成
group_name = f"DEBUG_Mask_{field_name}" if field_name else "DEBUG_Mask"
# 既存のグループがあれば削除
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups.remove(target_obj.vertex_groups[group_name])
# 新しいグループを作成
debug_group = target_obj.vertex_groups.new(name=group_name)
# ウェイトを設定
for vert_idx, weight in enumerate(mask_weights):
if weight > 0:
debug_group.add([vert_idx], weight, 'REPLACE')
print(f"Created debug mask group '{group_name}' using bones: {sorted(processed_bones)}")
return mask_weights
# --------------------------------------------------------------------
# BVHを用いた交差判定
# --------------------------------------------------------------------
def cross2d(u: Vector, v: Vector) -> float:
"""2Dベクトルの外積"""
return u.y * v.x - u.x * v.y
def point_in_triangle2d(p: Vector, a: Vector, b: Vector, c: Vector) -> bool:
"""点が2D三角形内にあるかチェック"""
pab = cross2d(p - a, b - a)
pbc = cross2d(p - b, c - b)
if pab * pbc < 0:
return False
pca = cross2d(p - c, a - c)
if pab * pca < 0:
return False
return True
def signed_2d_tri_area(a: Vector, b: Vector, c: Vector) -> float:
"""2D三角形の符号付き面積"""
return (a.x - c.x) * (b.y - c.y) - (a.y - c.y) * (b.x - c.x)
def test_2d_segment_segment(a: Vector, b: Vector, c: Vector, d: Vector) -> bool:
"""2D線分同士の交差判定"""
a1 = signed_2d_tri_area(a, b, d)
a2 = signed_2d_tri_area(a, b, c)
if a1 * a2 < 0.0:
a3 = signed_2d_tri_area(c, d, a)
a4 = a3 + a2 - a1
if a3 * a4 < 0.0:
return True
return False
def project_triangle_2d(triangle: list[Vector], normal: Vector) -> list[Vector]:
"""三角形を2D平面に投影"""
if abs(normal.x) >= abs(normal.y) and abs(normal.x) >= abs(normal.z):
# YZ平面
return [Vector((v.y, v.z)) for v in triangle]
elif abs(normal.y) >= abs(normal.z):
# XZ平面
return [Vector((v.x, v.z)) for v in triangle]
else:
# XY平面
return [Vector((v.x, v.y)) for v in triangle]
def triangle_area(triangle: list[Vector]) -> float:
a = (triangle[1] - triangle[0]).length
b = (triangle[2] - triangle[1]).length
c = (triangle[0] - triangle[2]).length
s = (a + b + c) / 2 # 半周長
# 浮動小数点の誤差による負の値を防ぐため max(..., 0) とする
area_val = max(s * (s - a) * (s - b) * (s - c), 0)
area = math.sqrt(area_val)
return area
def is_degenerate_triangle(triangle: list[Vector], epsilon: float = 1e-6) -> bool:
"""三角形が縮退しているかチェック"""
area = triangle_area(triangle)
return area < epsilon
def calc_triangle_normal(triangle: list[Vector]) -> Vector:
"""三角形の法線を計算(面積で重み付け)"""
v1 = triangle[1] - triangle[0]
v2 = triangle[2] - triangle[0]
normal = v1.cross(v2)
length = normal.length
if length > 1e-8: # 数値的な安定性のため
return normal / length
return Vector((0, 0, 0))
def intersect_triangle_triangle(t1: list[Vector], t2: list[Vector]) -> bool:
"""三角形同士の交差判定(数値誤差に注意)"""
EPSILON2 = 1e-6 # 数値計算の許容値
# 縮退した三角形のチェック
if is_degenerate_triangle(t1, EPSILON2) or is_degenerate_triangle(t2, EPSILON2):
return False
# 法線計算(面積で重み付け)
n1 = calc_triangle_normal(t1)
n2 = calc_triangle_normal(t2)
# 法線がゼロベクトルの場合(無効な三角形)
if n1.length < EPSILON2 or n2.length < EPSILON2:
return False
# 平面の方程式の定数項
d1_const = -n1.dot(t1[0])
d2_const = -n2.dot(t2[0])
# 各頂点と相手の平面との距離を計算
dist1 = [n2.dot(v) + d2_const for v in t1]
dist2 = [n1.dot(v) + d1_const for v in t2]
# 全頂点が同じ側にある場合は交差なし
if all(d >= 0 for d in dist1) or all(d <= 0 for d in dist1):
return False
if all(d >= 0 for d in dist2) or all(d <= 0 for d in dist2):
return False
# 内部関数:辺と平面の交点を計算
def compute_intersection_points(triangle, dists):
pts = []
for i in range(3):
j = (i + 1) % 3
di = dists[i]
dj = dists[j]
# 頂点が平面上にある場合も含む
if abs(di) < 1e-8:
pts.append(triangle[i])
if di * dj < 0:
t = di / (di - dj)
pt = triangle[i] + t * (triangle[j] - triangle[i])
pts.append(pt)
elif abs(dj) < 1e-8:
pts.append(triangle[j])
# 重複する点を除去
unique_pts = []
for p in pts:
if not any((p - q).length < 1e-8 for q in unique_pts):
unique_pts.append(p)
return unique_pts
pts1 = compute_intersection_points(t1, dist1)
pts2 = compute_intersection_points(t2, dist2)
# 交点が2点未満なら交差していないとみなす
if len(pts1) < 2 or len(pts2) < 2:
return False
# 共通線の方向を決定
d = n1.cross(n2)
if d.length < 1e-8:
# ほぼ同一平面上の場合は、このメソッドでは処理しない
return False
d.normalize()
# 交点を共通線上に射影して区間を求める
s1 = [d.dot(p) for p in pts1]
s2 = [d.dot(p) for p in pts2]
seg1_min, seg1_max = min(s1), max(s1)
seg2_min, seg2_max = min(s2), max(s2)
# 区間の重なりをチェック
if seg1_max < seg2_min or seg2_max < seg1_min:
return False
return True
def are_faces_adjacent(face1, face2):
"""2つの面が隣接しているかをチェック"""
verts1 = set(v.index for v in face1.verts)
verts2 = set(v.index for v in face2.verts)
return len(verts1.intersection(verts2)) > 0
def get_face_area(face) -> float:
"""面の面積を計算"""
if len(face.verts) == 3:
triangle = [v.co for v in face.verts]
return triangle_area(triangle)
else: # 四角形の場合
triangles = [
[face.verts[0].co, face.verts[1].co, face.verts[2].co],
[face.verts[0].co, face.verts[2].co, face.verts[3].co]
]
return sum(triangle_area(tri) for tri in triangles)
def is_face_too_small(face, min_area: float = 1e-8) -> bool:
"""面が小さすぎるかチェック"""
return get_face_area(face) < min_area
def get_face_thickness(face, normal: Vector) -> float:
"""面の厚みを計算"""
verts = [v.co for v in face.verts]
min_z = min(v.dot(normal) for v in verts)
max_z = max(v.dot(normal) for v in verts)
return max_z - min_z
def find_intersecting_faces_bvh(obj):
"""
BVHを用いてメッシュ内の自己交差を検出する。
各面(3角形または4角形)はまず三角形に分割し、
それぞれの三角形のバウンディングボックスに基づき候補ペアを
BVHで絞り込んだ上で、詳細な三角形交差判定を行う。
隣接面(頂点共有)は除外しています。
"""
# 評価済みメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_obj = obj.evaluated_get(depsgraph)
evaluated_mesh = evaluated_obj.data
# 作業用のBMeshを作成
bm = bmesh.new()
bm.from_mesh(evaluated_mesh)
bm.faces.ensure_lookup_table()
bm.transform(obj.matrix_world)
# 各面から「三角形」リストを作成
triangles = [] # 各要素は [Vector, Vector, Vector]
face_map = [] # 各三角形が元々属していた面のインデックス
face_vertex_sets = [] # 各三角形の元の面の頂点インデックス集合(隣接面チェック用)
for face in bm.faces:
if len(face.verts) not in [3, 4]:
continue
vertex_set = {v.index for v in face.verts}
if len(face.verts) == 3:
tri = [v.co.copy() for v in face.verts]
triangles.append(tri)
face_map.append(face.index)
face_vertex_sets.append(vertex_set)
elif len(face.verts) == 4:
# 対角線の長さにより分割方法を選択
v = [v.co.copy() for v in face.verts]
diag1 = (v[2] - v[0]).length_squared
diag2 = (v[3] - v[1]).length_squared
if diag1 < diag2:
tri1 = [v[0], v[1], v[2]]
tri2 = [v[0], v[2], v[3]]
else:
tri1 = [v[0], v[1], v[3]]
tri2 = [v[1], v[2], v[3]]
triangles.append(tri1)
face_map.append(face.index)
face_vertex_sets.append(vertex_set)
triangles.append(tri2)
face_map.append(face.index)
face_vertex_sets.append(vertex_set)
# BVHツリー作成用の頂点リストと三角形(ポリゴン)リストを構築
bvh_verts = []
bvh_polys = []
offset = 0
for tri in triangles:
bvh_verts.extend(tri) # 各三角形は独立の頂点集合として追加(同じ頂点でも複製)
bvh_polys.append((offset, offset+1, offset+2))
offset += 3
# BVHツリーを作成
epsilon = 1e-6
bvh_tree = BVHTree.FromPolygons(bvh_verts, bvh_polys, epsilon=epsilon)
# BVH同士のオーバーラップから候補ペアを取得
candidate_pairs = bvh_tree.overlap(bvh_tree)
intersecting_face_indices = set()
for i, j in candidate_pairs:
# 重複判定を避けるため i < j の組のみ処理
if i >= j:
continue
face_i = face_map[i]
face_j = face_map[j]
# 同じ面の場合は除外
if face_i == face_j:
continue
# 隣接面(頂点を共有している)は除外
if face_vertex_sets[i].intersection(face_vertex_sets[j]):
continue
tri1 = triangles[i]
tri2 = triangles[j]
if intersect_triangle_triangle(tri1, tri2):
intersecting_face_indices.add(face_i)
intersecting_face_indices.add(face_j)
# BMeshをクリーンアップ
bm.free()
return intersecting_face_indices
def get_new_intersections(obj, original_intersections):
"""
変形後に新たに発生した交差を検出(BVH版)。
元々の交差面の組み合わせを除外して返します。
"""
current_intersections = find_intersecting_faces_bvh(obj)
new_intersections = current_intersections - original_intersections
return new_intersections
def find_intersecting_faces_between(clothing_obj, base_obj):
"""
clothing_obj と base_obj の交差面(clothing_obj側の面インデックス)を
BVHTree を用いて高速に検出し、さらに intersect_triangle_triangle を用いて
詳細な交差判定を行う。
"""
# 評価済みオブジェクトからメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
cloth_eval = clothing_obj.evaluated_get(depsgraph)
base_eval = base_obj.evaluated_get(depsgraph)
mesh_cloth = cloth_eval.to_mesh()
mesh_base = base_eval.to_mesh()
# BMesh に変換して三角形化
bm_cloth = bmesh.new()
bm_cloth.from_mesh(mesh_cloth)
bmesh.ops.triangulate(bm_cloth, faces=bm_cloth.faces[:])
bm_cloth.faces.ensure_lookup_table()
bm_cloth.transform(clothing_obj.matrix_world)
bm_base = bmesh.new()
bm_base.from_mesh(mesh_base)
bmesh.ops.triangulate(bm_base, faces=bm_base.faces[:])
bm_base.faces.ensure_lookup_table()
bm_base.transform(base_obj.matrix_world)
# BVHTree を構築する際、面情報(元の face.index)とのマッピングも同時に作成する
def build_bvh_with_face_mapping(bm):
verts = [v.co.copy() for v in bm.verts]
polys = []
face_indices = []
for face in bm.faces:
if len(face.verts) == 3:
polys.append([v.index for v in face.verts])
face_indices.append(face.index)
bvh = BVHTree.FromPolygons(verts, polys, epsilon=0.0001)
return bvh, face_indices, verts, polys
bvh_base, base_face_indices, base_verts, base_polys = build_bvh_with_face_mapping(bm_base)
bvh_cloth, cloth_face_indices, cloth_verts, cloth_polys = build_bvh_with_face_mapping(bm_cloth)
# BVHTree.overlap により、両方のツリー間で重なっている三角形ペアを取得
candidate_pairs = bvh_base.overlap(bvh_cloth)
intersecting_faces = set()
# 候補ペアについて、詳細な交差判定(intersect_triangle_triangle)を実施
for base_idx, cloth_idx in candidate_pairs:
base_tri = [base_verts[i] for i in base_polys[base_idx]]
cloth_tri = [cloth_verts[i] for i in cloth_polys[cloth_idx]]
if intersect_triangle_triangle(base_tri, cloth_tri):
# 衣装側の三角形に対応する元の面インデックスを記録
intersecting_faces.add(cloth_face_indices[cloth_idx])
# 後始末
bm_cloth.free()
bm_base.free()
cloth_eval.to_mesh_clear()
base_eval.to_mesh_clear()
return intersecting_faces
def duplicate_geometry_with_positions(obj, new_positions):
"""対象オブジェクトを複製し、頂点座標を new_positions に書き換えたオブジェクトを返す"""
new_obj = obj.copy()
new_obj.data = obj.data.copy()
bpy.context.scene.collection.objects.link(new_obj)
mesh = new_obj.data
armature_obj = get_armature_from_modifier(obj)
if new_obj.data.shape_keys is None:
new_obj.shape_key_add(name='Basis')
tmp_shape_key = new_obj.shape_key_add(name="temp_deformation_shapekey")
tmp_shape_key.value = 1.0
matrix_armature_inv_fallback = Matrix.Identity(4)
for i, v in enumerate(mesh.vertices):
if armature_obj is not None:
matrix_armature_inv = calculate_inverse_pose_matrix(new_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(new_positions[i])
tmp_shape_key.data[i].co = new_obj.matrix_world.inverted() @ undeformed_world_pos
matrix_armature_inv_fallback = matrix_armature_inv
else:
tmp_shape_key.data[i].co = new_obj.matrix_world.inverted() @ Vector(new_positions[i])
new_obj.data.update()
return new_obj
# 中央値より指定された倍率以上のエッジを細分化します。
def subdivide_long_edges(obj, min_edge_length=0.005, max_edge_length_ratio=2.0, cuts=1):
"""
指定されたオブジェクトの中央値エッジ長より指定された倍率以上のエッジを細分化します。
"""
mesh = obj.data
had_custom_normals = mesh.has_custom_normals
if not obj or obj.type != 'MESH':
print("無効なオブジェクトです")
return
if len(obj.data.vertices) == 0:
print("メッシュに頂点がありません")
return
# --- 細分化前にCustom Split Normalsを保存(cKDTree版) ---
orig_normals_per_vertex = {}
kd = None
if had_custom_normals:
# 各ループを1度の走査で、頂点ごとに法線リストを作成
temp_normals = {i: [] for i in range(len(mesh.vertices))}
for loop in mesh.loops:
temp_normals[loop.vertex_index].append(loop.normal)
for v_idx, normals in temp_normals.items():
if normals:
avg = Vector((0.0, 0.0, 0.0))
for n in normals:
avg += n
if avg.length > 1e-8:
avg.normalize()
orig_normals_per_vertex[v_idx] = avg.copy()
# 各頂点の座標をNumPy配列にまとめ、cKDTreeを構築
points = np.array([v.co[:] for v in mesh.vertices])
kd = cKDTree(points)
try:
# --- BMeshを用いた細分化処理 ---
bm = bmesh.new()
bm.from_mesh(mesh)
bm.edges.ensure_lookup_table()
# 全エッジの長さを計算して中央値を求める
edge_lengths = []
for edge in bm.edges:
if edge.calc_length() >= min_edge_length:
edge_lengths.append(edge.calc_length())
if not edge_lengths:
print("エッジが見つかりません")
bm.free()
return
# エッジ長をソートして中央値を計算
edge_lengths.sort()
n = len(edge_lengths)
if n % 2 == 0:
# 偶数個の場合は中央2つの値の平均
median_edge_length = (edge_lengths[n//2 - 1] + edge_lengths[n//2]) / 2
else:
# 奇数個の場合は中央の値
median_edge_length = edge_lengths[n//2]
threshold_length = median_edge_length * max_edge_length_ratio
print(f"中央値エッジ長: {median_edge_length:.6f}")
print(f"細分化閾値: {threshold_length:.6f} (中央値の{max_edge_length_ratio}倍)")
# 閾値以上の長さのエッジを特定
edges_to_subdivide = []
for edge in bm.edges:
if edge.calc_length() >= threshold_length:
edges_to_subdivide.append(edge)
print(f"細分化対象エッジ数: {len(edges_to_subdivide)} / {len(bm.edges)}")
if edges_to_subdivide:
bmesh.ops.subdivide_edges(
bm,
edges=edges_to_subdivide,
cuts=cuts,
use_grid_fill=True,
use_single_edge=False,
use_only_quads=False
)
print(f"エッジを{cuts}回細分化しました")
# BMeshの内容をメッシュに反映
bm.to_mesh(mesh)
mesh.update()
bm.free()
except Exception as e:
print(f"細分化中にエラーが発生しました: {e}")
if 'bm' in locals():
bm.free()
# --- 細分化後、Custom Split Normalsを再設定(cKDTree使用) ---
if had_custom_normals and kd is not None:
new_loop_normals = [None] * len(mesh.loops)
for i, loop in enumerate(mesh.loops):
v_index = loop.vertex_index
v_co = mesh.vertices[v_index].co
# cKDTreeで最寄りの頂点を検索(距離, インデックスを返す)
dist, orig_index = kd.query(v_co)
# 保存しておいた元の法線を取得(なければ現状の法線を使用)
new_loop_normals[i] = orig_normals_per_vertex.get(orig_index, mesh.vertices[v_index].normal)
mesh.use_auto_smooth = True
mesh.normals_split_custom_set(new_loop_normals)
mesh.update()
def subdivide_faces(obj, face_indices, cuts=1, max_distance=0.005):
"""
指定された面(face_indices)からワールド座標系で一定距離以内にある面を細分化します。
BVHTreeを使用して高速化を行います。
※Custom Split Normalsがある場合、細分化前に各頂点の平均カスタム法線を保存し、
細分化後に各ループの最寄り元法線を補間して再設定します。
"""
mesh = obj.data
had_custom_normals = mesh.has_custom_normals
if not obj or obj.type != 'MESH':
print("無効なオブジェクトです")
return
if len(obj.data.vertices) == 0:
print("メッシュに頂点がありません")
return
# --- 細分化前にCustom Split Normalsを保存(cKDTree版) ---
orig_normals_per_vertex = {}
kd = None
if had_custom_normals:
# 各ループを1度の走査で、頂点ごとに法線リストを作成
temp_normals = {i: [] for i in range(len(mesh.vertices))}
for loop in mesh.loops:
temp_normals[loop.vertex_index].append(loop.normal)
for v_idx, normals in temp_normals.items():
if normals:
avg = Vector((0.0, 0.0, 0.0))
for n in normals:
avg += n
if avg.length > 1e-8:
avg.normalize()
orig_normals_per_vertex[v_idx] = avg.copy()
# 各頂点の座標をNumPy配列にまとめ、cKDTreeを構築
points = np.array([v.co[:] for v in mesh.vertices])
kd = cKDTree(points)
try:
# --- BMeshを用いた細分化処理 ---
bm = bmesh.new()
bm.from_mesh(mesh)
bm.faces.ensure_lookup_table()
# ワールド座標系に変換
bm.transform(obj.matrix_world)
# BVHTreeを構築
bvh_tree = BVHTree.FromBMesh(bm)
# 初期対象の面を取得
initial_faces = {f for f in bm.faces if f.index in face_indices}
# 距離内の面を検索するための対象面のセット
faces_within_distance = set(initial_faces)
# 各初期対象面からdistance_threshold以内の面を検索
for f in initial_faces:
# 面の中心点を計算
face_center = f.calc_center_median()
# 面の法線とサイズを考慮した検索範囲を設定
# 面の最大エッジ長を計算してサーチ半径に加算
max_edge_length = max([e.calc_length() for e in f.edges])
search_radius = max_edge_length
if max_edge_length > max_distance:
search_radius = max_distance
# BVHTreeで近傍の面を検索
for (location, normal, index, distance) in bvh_tree.find_nearest_range(face_center, search_radius):
if index is not None and index < len(bm.faces):
candidate_face = bm.faces[index]
faces_within_distance.add(candidate_face)
# ワールド座標から元の座標系に戻す
bm.transform(obj.matrix_world.inverted())
# 細分化対象のエッジは、距離内の対象面に属するエッジのみ
all_edges_candidates = {edge for f in faces_within_distance for edge in f.edges}
# エッジの長さが0.004より短いものを除外
min_edge_length = 0.004
edges_to_subdivide = []
for edge in all_edges_candidates:
edge_length = edge.calc_length()
if edge_length >= min_edge_length:
edges_to_subdivide.append(edge)
if edges_to_subdivide:
bmesh.ops.subdivide_edges(
bm,
edges=edges_to_subdivide,
cuts=cuts,
use_grid_fill=True,
use_single_edge=False,
use_only_quads=True
)
# 対象面とその隣接面、さらにその隣接面を一度の走査で取得
faces_to_check = set(faces_within_distance)
# 1次隣接面を取得
first_level_adjacent = set()
for f in faces_within_distance:
for edge in f.edges:
first_level_adjacent.update(edge.link_faces)
faces_to_check.update(first_level_adjacent)
# 2次隣接面を取得
for f in first_level_adjacent:
for edge in f.edges:
faces_to_check.update(edge.link_faces)
# 五角形以上のポリゴンを三角形化
ngons = [f for f in faces_to_check if len(f.verts) > 4]
if ngons:
bmesh.ops.triangulate(
bm,
faces=ngons,
quad_method='BEAUTY',
ngon_method='BEAUTY'
)
# BMeshの内容をメッシュに反映
bm.to_mesh(mesh)
mesh.update()
bm.free()
except Exception as e:
print(f"細分化中にエラーが発生しました: {e}")
# --- 細分化後、Custom Split Normalsを再設定(cKDTree使用) ---
if had_custom_normals and kd is not None:
new_loop_normals = [None] * len(mesh.loops)
for i, loop in enumerate(mesh.loops):
v_index = loop.vertex_index
v_co = mesh.vertices[v_index].co
# cKDTreeで最寄りの頂点を検索(距離, インデックスを返す)
dist, orig_index = kd.query(v_co)
# 保存しておいた元の法線を取得(なければ現状の法線を使用)
new_loop_normals[i] = orig_normals_per_vertex.get(orig_index, mesh.vertices[v_index].normal)
mesh.use_auto_smooth = True
mesh.normals_split_custom_set(new_loop_normals)
mesh.update()
# ① Deformation Field のキャッシュ用グローバル辞書とヘルパー関数
_deformation_field_cache = {}
def get_deformation_field(field_data_path: str) -> dict:
"""
指定されたパスの Deformation Field データを読み込み、KDTree を構築してキャッシュする。
既に読み込まれていればキャッシュから返す。
"""
global _deformation_field_cache
if field_data_path in _deformation_field_cache:
return _deformation_field_cache[field_data_path]
# Deformation Field のデータ読み込み
data = np.load(field_data_path, allow_pickle=True)
field_points = data['field_points']
delta_positions = data['delta_positions']
# weightsが存在しない場合はすべて1のものを使用
if 'weights' in data:
field_weights = data['weights']
else:
field_weights = np.ones(len(field_points))
world_matrix = Matrix(data['world_matrix'])
world_matrix_inv = world_matrix.inverted()
# kdtree_query_kの値を取得(存在しない場合はデフォルト値64を使用)
k_neighbors = 64
if 'kdtree_query_k' in data:
try:
k_value = data['kdtree_query_k']
k_neighbors = int(k_value)
print(f"kdtree_query_k value: {k_neighbors}")
except Exception as e:
print(f"Warning: Could not process kdtree_query_k value: {e}")
# KDTree の構築
kdtree = cKDTree(field_points)
field_info = {
'data': data,
'field_points': field_points,
'delta_positions': delta_positions,
'field_weights': field_weights,
'world_matrix': world_matrix,
'world_matrix_inv': world_matrix_inv,
'kdtree': kdtree,
'kdtree_query_k': k_neighbors,
}
_deformation_field_cache[field_data_path] = field_info
return field_info
def get_deformation_field_multi_step(field_data_path: str) -> dict:
"""
指定されたパスの多段階Deformation Field データを読み込み、KDTree を構築してキャッシュする。
SaveAndApplyFieldAuto.pyのapply_field_data関数と同様の多段階データ処理をサポート。
"""
global _deformation_field_cache
multi_step_key = field_data_path + "_multi_step"
if multi_step_key in _deformation_field_cache:
return _deformation_field_cache[multi_step_key]
# Deformation Field のデータ読み込み
data = np.load(field_data_path, allow_pickle=True)
# データ形式の確認と読み込み
if 'all_field_points' in data:
# 新形式:各ステップの座標が保存されている場合
all_field_points = data['all_field_points']
all_delta_positions = data['all_delta_positions']
num_steps = int(data.get('num_steps', len(all_delta_positions)))
print(f"複数ステップのデータ(新形式)を検出: {num_steps}ステップ")
# ミラー設定を確認(データに含まれていない場合はそのまま使用)
enable_x_mirror = data.get('enable_x_mirror', False)
print(f"X軸ミラー設定: {'有効' if enable_x_mirror else '無効'}")
if enable_x_mirror:
# X軸ミラーリング:X座標が0より大きいデータを負に反転してミラーデータを追加
mirrored_field_points = []
mirrored_delta_positions = []
for step in range(num_steps):
field_points = all_field_points[step].copy()
delta_positions = all_delta_positions[step].copy()
if len(field_points) > 0:
# X座標が0より大きいデータを検索
x_positive_mask = field_points[:, 0] > 0.0
if np.any(x_positive_mask):
# ミラーデータを作成
mirror_field_points = field_points[x_positive_mask].copy()
mirror_delta_positions = delta_positions[x_positive_mask].copy()
# X座標とX成分の変位を反転
mirror_field_points[:, 0] *= -1.0
mirror_delta_positions[:, 0] *= -1.0
# 元のデータとミラーデータを結合
combined_field_points = np.vstack([field_points, mirror_field_points])
combined_delta_positions = np.vstack([delta_positions, mirror_delta_positions])
mirrored_field_points.append(combined_field_points)
mirrored_delta_positions.append(combined_delta_positions)
print(f"ステップ {step+1}: 元の頂点数 {len(field_points)} → ミラー適用後 {len(combined_field_points)}")
else:
mirrored_field_points.append(field_points)
mirrored_delta_positions.append(delta_positions)
print(f"ステップ {step+1}: フィールド頂点数 {len(field_points)} (ミラー対象なし)")
else:
mirrored_field_points.append(field_points)
mirrored_delta_positions.append(delta_positions)
print(f"ステップ {step+1}: フィールド頂点数 0")
# ミラー適用後のデータを使用
all_field_points = mirrored_field_points
all_delta_positions = mirrored_delta_positions
else:
# ミラーが無効の場合、元のデータをそのまま使用
print("X軸ミラーリングが無効のため、元のデータをそのまま使用します")
print("field_data_path: ", field_data_path)
for step in range(num_steps):
print(f"ステップ {step+1}: フィールド頂点数 {len(all_field_points[step])}")
elif 'field_points' in data and 'all_delta_positions' in data:
# 旧形式:単一の座標セットが保存されている場合
field_points = data['field_points']
all_delta_positions = data['all_delta_positions']
num_steps = int(data.get('num_steps', len(all_delta_positions)))
# 旧形式の場合、すべてのステップで同じ座標を使用
all_field_points = [field_points for _ in range(num_steps)]
print(f"複数ステップのデータ(旧形式)を検出: {num_steps}ステップ")
else:
# 後方互換性のため、単一ステップのデータも処理
field_points = data.get('field_points', data.get('delta_positions', []))
delta_positions = data.get('delta_positions', data.get('all_delta_positions', [[]])[0] if 'all_delta_positions' in data else [])
all_field_points = [field_points]
all_delta_positions = [delta_positions]
num_steps = 1
print("単一ステップのデータを検出")
# weightsが存在しない場合はすべて1のものを使用
if 'weights' in data:
field_weights = data['weights']
else:
field_weights = np.ones(len(all_field_points[0]) if len(all_field_points) > 0 else 0)
world_matrix = Matrix(data['world_matrix'])
world_matrix_inv = world_matrix.inverted()
# kdtree_query_kの値を取得(存在しない場合はデフォルト値8を使用)
k_neighbors = 8
# if 'kdtree_query_k' in data:
# try:
# k_value = data['kdtree_query_k']
# k_neighbors = int(k_value)
# print(f"kdtree_query_k value: {k_neighbors}")
# except Exception as e:
# print(f"Warning: Could not process kdtree_query_k value: {e}")
# RBFパラメータの読み込み
rbf_epsilon = float(data.get('rbf_epsilon', 0.00001))
print(f"RBF補間パラメータ: 関数=multi_quadratic_biharmonic, epsilon={rbf_epsilon}")
field_info = {
'data': data,
'all_field_points': all_field_points,
'all_delta_positions': all_delta_positions,
'num_steps': num_steps,
'field_weights': field_weights,
'world_matrix': world_matrix,
'world_matrix_inv': world_matrix_inv,
'kdtree_query_k': k_neighbors,
'rbf_epsilon': rbf_epsilon,
'is_multi_step': num_steps > 1
}
_deformation_field_cache[multi_step_key] = field_info
return field_info
def find_connected_components(mesh_obj):
"""
メッシュオブジェクト内で接続していないコンポーネントを検出する
Parameters:
mesh_obj: 検出対象のメッシュオブジェクト
Returns:
List[Set[int]]: 各コンポーネントに含まれる頂点インデックスのセットのリスト
"""
# BMeshを作成し、元のメッシュからデータをコピー
bm = bmesh.new()
bm.from_mesh(mesh_obj.data)
bm.verts.ensure_lookup_table()
# 頂点インデックスのマッピングを作成(BMesh内のインデックス → 元のメッシュのインデックス)
vert_indices = {v.index: i for i, v in enumerate(bm.verts)}
# 未訪問の頂点を追跡
unvisited = set(vert_indices.keys())
components = []
while unvisited:
# 未訪問の頂点から開始
start_idx = next(iter(unvisited))
# 幅優先探索で連結成分を検出
component = set()
queue = [start_idx]
while queue:
current = queue.pop(0)
if current in unvisited:
unvisited.remove(current)
component.add(vert_indices[current]) # 元のメッシュのインデックスに変換して追加
# 隣接頂点をキューに追加(エッジで接続されている頂点のみ)
for edge in bm.verts[current].link_edges:
other = edge.other_vert(bm.verts[current]).index
if other in unvisited:
queue.append(other)
# 頂点数が1のコンポーネント(孤立頂点)は除外
if len(component) > 1:
components.append(component)
bm.free()
return components
def check_uniform_weights(mesh_obj, component_verts, armature_obj):
"""
指定されたコンポーネント内の頂点が一様なボーンウェイトを持つか確認する
Parameters:
mesh_obj: メッシュオブジェクト
component_verts: コンポーネントに含まれる頂点インデックスのセット
armature_obj: ウェイト確認対象のアーマチュア
Returns:
(bool, dict): 一様なウェイトを持つかどうかのフラグと、ボーン名:ウェイト値の辞書
"""
if not armature_obj:
return False, {}
# アーマチュアの全ボーン名を取得
target_bones = {bone.name for bone in armature_obj.data.bones}
# 最初の頂点のウェイトパターンを取得
first_vert_idx = next(iter(component_verts))
first_weights = {}
for group in mesh_obj.vertex_groups:
if group.name in target_bones:
weight = 0.0
try:
for g in mesh_obj.data.vertices[first_vert_idx].groups:
if g.group == group.index:
weight = g.weight
break
except RuntimeError:
pass
if weight > 0:
first_weights[group.name] = weight
# 他の全頂点が同じウェイトパターンを持つか確認
for vert_idx in component_verts:
if vert_idx == first_vert_idx:
continue
for bone_name, weight in first_weights.items():
group = mesh_obj.vertex_groups.get(bone_name)
if not group:
return False, {}
current_weight = 0.0
try:
for g in mesh_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
current_weight = g.weight
break
except RuntimeError:
pass
# ウェイト値が異なる場合は一様でない
if abs(current_weight - weight) >= 0.001:
return False, {}
# 追加のボーングループがないか確認
for group in mesh_obj.vertex_groups:
if group.name in target_bones and group.name not in first_weights:
weight = 0.0
try:
for g in mesh_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
weight = g.weight
break
except RuntimeError:
pass
if weight > 0:
return False, {}
return True, first_weights
def generate_weight_hash(weights):
"""ウェイト辞書からハッシュ値を生成する(0.001より小さい部分を四捨五入)"""
sorted_items = sorted(weights.items())
# ウェイト値を0.001の精度で四捨五入
hash_str = "_".join([f"{name}:{round(weight, 3):.3f}" for name, weight in sorted_items])
return hash_str
def calculate_obb(vertices_world):
"""
頂点のワールド座標から最適な向きのバウンディングボックスを計算
Parameters:
vertices_world: 頂点のワールド座標のリスト
Returns:
(axes, extents): 主軸方向と、各方向の半分の長さ
"""
if vertices_world is None or len(vertices_world) < 3:
return None, None
# 点群の重心を計算
centroid = np.mean(vertices_world, axis=0)
# 重心を原点に移動
centered = vertices_world - centroid
# 共分散行列を計算
cov = np.cov(centered, rowvar=False)
# 固有ベクトルと固有値を計算
eigenvalues, eigenvectors = np.linalg.eigh(cov)
# 固有ベクトルが主軸となる
axes = eigenvectors
# 各軸方向のextentを計算
extents = np.zeros(3)
for i in range(3):
axis = axes[:, i]
projection = np.dot(centered, axis)
extents[i] = (np.max(projection) - np.min(projection)) / 2.0
return axes, extents
def separate_and_combine_components(mesh_obj, clothing_armature, do_not_separate_names=None, clustering=True, clothing_avatar_data=None):
"""
メッシュオブジェクト内の接続されていないコンポーネントを検出し、
同じボーンウェイトパターンを持つものをグループ化して分離する
Parameters:
mesh_obj: 処理対象のメッシュオブジェクト
clothing_armature: 衣装のアーマチュアオブジェクト
do_not_separate_names: 分離しないオブジェクト名のリスト(オプション)
clustering: クラスタリングを実行するかどうか
clothing_avatar_data: 衣装のアバターデータ(オプション)
Returns:
(List[bpy.types.Object], List[bpy.types.Object]): 分離されたオブジェクトと分離されなかったオブジェクトのリスト
"""
# 分離しないオブジェクト名のリストがNoneの場合は空リストを使用
if do_not_separate_names is None:
do_not_separate_names = []
# 指定されたhumanoidボーンとそのauxiliaryBonesを取得
allowed_bones = set()
if clothing_avatar_data:
# 対象のhumanoidボーン名
target_humanoid_bones = ["Spine", "Chest", "Neck", "LeftBreast", "RightBreast"]
# humanoidBonesからマッピングを作成
humanoid_to_bone = {}
if "humanoidBones" in clothing_avatar_data:
for bone_data in clothing_avatar_data["humanoidBones"]:
humanoid_name = bone_data.get("humanoidBoneName", "")
bone_name = bone_data.get("boneName", "")
if humanoid_name and bone_name:
humanoid_to_bone[humanoid_name] = bone_name
# 対象のhumanoidボーンに対応するボーン名を追加
for humanoid_bone in target_humanoid_bones:
if humanoid_bone in humanoid_to_bone:
allowed_bones.add(humanoid_to_bone[humanoid_bone])
# auxiliaryBonesから関連するボーンを追加
if "auxiliaryBones" in clothing_avatar_data:
for aux_bone_data in clothing_avatar_data["auxiliaryBones"]:
parent_humanoid = aux_bone_data.get("parentHumanoidBoneName", "")
if parent_humanoid in target_humanoid_bones:
bone_name = aux_bone_data.get("boneName", "")
if bone_name:
allowed_bones.add(bone_name)
print(f"Allowed bones for separation: {sorted(allowed_bones)}")
def has_allowed_bone_weights(weights):
"""ウェイトパターンが許可されたボーンのウェイトを含むかチェック"""
if not allowed_bones:
return True # 制限がない場合はすべて許可
for bone_name in weights.keys():
if bone_name in allowed_bones:
return True
return False
# 連結成分を検出
components = find_connected_components(mesh_obj)
if len(components) <= 1:
# 単一の連結成分の場合は分離しない
return [], [mesh_obj]
print(f"Found {len(components)} connected components in {mesh_obj.name}")
# 各コンポーネントのウェイトを確認
component_data = []
weight_hash_do_not_separate = []
for i, component in enumerate(components):
is_uniform, weights = check_uniform_weights(mesh_obj, component, clothing_armature)
if is_uniform and weights:
# 許可されたボーンのウェイトを持つかチェック
# if not has_allowed_bone_weights(weights):
# print(f"Component {i} in {mesh_obj.name} does not have allowed bone weights, skipping separation")
# component_data.append((component, False, {}, "", 0.0))
# continue
# コンポーネント内の頂点のワールド座標を取得
vertices_world = []
for vert_idx in component:
vert_co = mesh_obj.data.vertices[vert_idx].co.copy()
vert_world = mesh_obj.matrix_world @ vert_co
vertices_world.append(np.array([vert_world.x, vert_world.y, vert_world.z]))
vertices_world = np.array(vertices_world)
# OBBを計算
axes, extents = calculate_obb(vertices_world)
# 最長辺の長さを計算
if extents is not None:
max_extent = np.max(extents) * 2.0 # 半分の長さなので2倍
# 一様なウェイトを持つコンポーネント
weight_hash = generate_weight_hash(weights)
# 小さすぎるコンポーネントは除外
if max_extent < 0.0003:
print(f"Component {i} in {mesh_obj.name} is too small (max extent: {max_extent:.4f}), skipping")
component_data.append((component, False, {}, "", max_extent))
else:
# do_not_separate_namesに含まれる名前のパターンを持つコンポーネントは分離しない
should_separate = True
temp_name = f"{mesh_obj.name}_Uniform_{i}"
# オブジェクト名チェック
if should_separate:
for name_pattern in do_not_separate_names:
if name_pattern in temp_name:
should_separate = False
print(f"Component {i} in {mesh_obj.name} name matches do_not_separate pattern: {name_pattern}")
weight_hash_do_not_separate.append(weight_hash)
break
if should_separate:
for hash_val in weight_hash_do_not_separate:
if hash_val == weight_hash:
should_separate = False
print(f"Component {i} in {mesh_obj.name} weight hash matches do_not_separate pattern: {hash_val}")
break
if should_separate:
print(f"Component {i} in {mesh_obj.name} has uniform weights: {weight_hash} (max extent: {max_extent:.4f})")
# 頂点座標も保存
component_data.append((component, True, weights, weight_hash, max_extent, vertices_world))
else:
component_data.append((component, False, {}, "", max_extent))
else:
# OBBの計算に失敗した場合は分離しない
print(f"Component {i} in {mesh_obj.name} OBB calculation failed")
component_data.append((component, False, {}, "", 0.0))
else:
# 一様でないか、ウェイトを持たないコンポーネント
print(f"Component {i} in {mesh_obj.name} does not have uniform weights")
component_data.append((component, False, {}, "", 0.0))
# ウェイトハッシュでグループ化
weight_groups = {}
non_uniform_components = []
for component, is_uniform, weights, weight_hash, max_extent, *extra_data in component_data:
if is_uniform:
if weight_hash not in weight_groups:
weight_groups[weight_hash] = []
vertices_world = extra_data[0] if extra_data else None
weight_groups[weight_hash].append((component, vertices_world))
else:
non_uniform_components.append(component)
# 一様なウェイトを持つコンポーネントを分離
uniform_objects = []
if clustering:
# 各ウェイトハッシュのコンポーネントを空間的な距離に基づいてさらにクラスタリング
for weight_hash, components_with_coords in weight_groups.items():
# コンポーネントの座標とサイズを計算
component_coords = {}
component_sizes = {}
component_indices = {}
for i, (component, vertices_world) in enumerate(components_with_coords):
if vertices_world is not None and len(vertices_world) > 0:
# コンポーネントの中心を計算
center = np.mean(vertices_world, axis=0)
# NumPy配列をVectorに変換
vectors = [Vector(v) for v in vertices_world]
component_coords[i] = vectors
component_sizes[i] = calculate_component_size(vectors)
component_indices[i] = component
# 空間的なクラスタリングを実行
clusters = cluster_components_by_adaptive_distance(component_coords, component_sizes)
print(f"Weight hash {weight_hash} has {len(clusters)} spatial clusters")
# 各クラスターごとに別々のオブジェクトを作成
for cluster_idx, cluster in enumerate(clusters):
# 名前を設定(最初のコンポーネントIDと空間クラスターIDを使用)
first_component_id = -1
for i, (component, is_uniform, weights, hash_val, _, *_) in enumerate(component_data):
if is_uniform and hash_val == weight_hash:
for comp_idx in cluster:
if component == component_indices[comp_idx]:
first_component_id = i
break
if first_component_id >= 0:
break
if first_component_id >= 0:
cluster_name = f"{mesh_obj.name}_Uniform_{first_component_id}_Cluster_{cluster_idx}"
else:
cluster_name = f"{mesh_obj.name}_Uniform_Hash_{len(uniform_objects)}_Cluster_{cluster_idx}"
should_separate = True
for name_pattern in do_not_separate_names:
if name_pattern in cluster_name:
print(f"Component {i} in {cluster_name} name matches do_not_separate pattern: {name_pattern}")
for (component, vertices_world) in components_with_coords:
non_uniform_components.append(component)
should_separate = False
break
if not should_separate:
continue
# アクティブオブジェクトの保存
original_active = bpy.context.view_layer.objects.active
# 元のメッシュを選択
bpy.ops.object.select_all(action='DESELECT')
mesh_obj.select_set(True)
bpy.context.view_layer.objects.active = mesh_obj
# オブジェクトを複製
bpy.ops.object.duplicate(linked=False)
new_obj = bpy.context.active_object
new_obj.name = cluster_name
# クラスター内のコンポーネントの頂点を収集
keep_vertices = set()
for comp_idx in cluster:
keep_vertices.update(component_indices[comp_idx])
# このクラスターに属する頂点以外を削除
# 編集モードに入る
bpy.ops.object.select_all(action='DESELECT')
new_obj.select_set(True)
bpy.context.view_layer.objects.active = new_obj
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type="VERT")
# 全頂点の選択を解除
bpy.ops.mesh.select_all(action='DESELECT')
# 保持する頂点を選択
bpy.ops.object.mode_set(mode='OBJECT')
for i, vert in enumerate(new_obj.data.vertices):
vert.select = i in keep_vertices
# 選択頂点以外を削除
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='INVERT')
bpy.ops.mesh.delete(type='VERT')
bpy.ops.object.mode_set(mode='OBJECT')
# オブジェクトに元のシェイプキーを保持
if mesh_obj.data.shape_keys:
for key_block in mesh_obj.data.shape_keys.key_blocks:
if key_block.name not in new_obj.data.shape_keys.key_blocks:
shape_key = new_obj.shape_key_add(name=key_block.name)
# シェイプキーの値をコピー
shape_key.value = key_block.value
uniform_objects.append(new_obj)
# 元のアクティブオブジェクトに戻す
bpy.context.view_layer.objects.active = original_active
# 分離されないコンポーネントがある場合は元のメッシュを複製
if non_uniform_components:
# アクティブオブジェクトの保存
original_active = bpy.context.view_layer.objects.active
# 元のメッシュを選択
bpy.ops.object.select_all(action='DESELECT')
mesh_obj.select_set(True)
bpy.context.view_layer.objects.active = mesh_obj
# オブジェクトを複製
bpy.ops.object.duplicate(linked=False)
non_uniform_obj = bpy.context.active_object
non_uniform_obj.name = f"{mesh_obj.name}_NonUniform"
# 分離されないコンポーネントの頂点以外を削除
keep_vertices = set()
for component in non_uniform_components:
keep_vertices.update(component)
# 編集モードに入る
bpy.ops.object.select_all(action='DESELECT')
non_uniform_obj.select_set(True)
bpy.context.view_layer.objects.active = non_uniform_obj
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type="VERT")
# 全頂点の選択を解除
bpy.ops.mesh.select_all(action='DESELECT')
# 保持する頂点を選択
bpy.ops.object.mode_set(mode='OBJECT')
for i, vert in enumerate(non_uniform_obj.data.vertices):
vert.select = i in keep_vertices
# 選択頂点以外を削除
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='INVERT')
bpy.ops.mesh.delete(type='VERT')
bpy.ops.object.mode_set(mode='OBJECT')
# 元のアクティブオブジェクトに戻す
bpy.context.view_layer.objects.active = original_active
else:
non_uniform_obj = None
# 返却するオブジェクトリストを作成
separated_objects = uniform_objects
non_separated_objects = [non_uniform_obj] if non_uniform_obj else []
# 分離されなかったオブジェクトの頂点数を表示
if non_uniform_obj:
print(f"Non-separated object '{non_uniform_obj.name}' vertex count: {len(non_uniform_obj.data.vertices)}")
else:
print("No non-separated object.")
# 分離された各オブジェクトの頂点数を表示
for sep_obj in uniform_objects:
print(f"Separated object '{sep_obj.name}' vertex count: {len(sep_obj.data.vertices)}")
return separated_objects, non_separated_objects
def calculate_optimal_rigid_transform(source_points, target_points):
"""
2つの点群間の最適な剛体変換(回転と平行移動)を計算する
Parameters:
source_points: 変換元の点群 (Nx3 のNumPy配列)
target_points: 変換先の点群 (Nx3 のNumPy配列)
Returns:
(R, t): 回転行列 (3x3) と平行移動ベクトル (3x1)
"""
# 点群の重心を計算
centroid_source = np.mean(source_points, axis=0)
centroid_target = np.mean(target_points, axis=0)
# 重心を原点に移動
source_centered = source_points - centroid_source
target_centered = target_points - centroid_target
# 共分散行列を計算
H = source_centered.T @ target_centered
# 特異値分解
U, S, Vt = np.linalg.svd(H)
# 回転行列を計算
R = Vt.T @ U.T
# 反射を防ぐ(行列式が負の場合)
if np.linalg.det(R) < 0:
Vt[-1, :] *= -1
R = Vt.T @ U.T
# 平行移動ベクトルを計算
t = centroid_target - R @ centroid_source
return R, t
def apply_rigid_transform_to_points(points, R, t):
"""
点群に剛体変換を適用する
Parameters:
points: 変換する点群 (Nx3 のNumPy配列)
R: 回転行列 (3x3)
t: 平行移動ベクトル (3x1)
Returns:
transformed_points: 変換後の点群 (Nx3 のNumPy配列)
"""
return (R @ points.T).T + t
def calculate_optimal_similarity_transform(source_points, target_points):
"""
2つの点群間の最適な相似変換(スケール、回転、平行移動)を計算する
Parameters:
source_points: 変換元の点群 (Nx3 のNumPy配列)
target_points: 変換先の点群 (Nx3 のNumPy配列)
Returns:
(s, R, t): スケーリング係数 (スカラー), 回転行列 (3x3), 平行移動ベクトル (3x1)
"""
# 点群の重心を計算
centroid_source = np.mean(source_points, axis=0)
centroid_target = np.mean(target_points, axis=0)
# 重心を原点に移動
source_centered = source_points - centroid_source
target_centered = target_points - centroid_target
# ソース点群の二乗和を計算(スケーリング係数の計算用)
source_scale = np.sum(source_centered**2)
# 共分散行列を計算
H = source_centered.T @ target_centered
# 特異値分解
U, S, Vt = np.linalg.svd(H)
# 回転行列を計算
R = Vt.T @ U.T
# 反射を防ぐ(行列式が負の場合)
if np.linalg.det(R) < 0:
Vt[-1, :] *= -1
R = Vt.T @ U.T
# 最適なスケーリング係数を計算
trace_RSH = np.sum(S)
s = trace_RSH / source_scale if source_scale > 0 else 1.0
# 平行移動ベクトルを計算
t = centroid_target - s * (R @ centroid_source)
return s, R, t
def apply_similarity_transform_to_points(points, s, R, t):
"""
点群に相似変換を適用する
Parameters:
points: 変換する点群 (Nx3 のNumPy配列)
s: スケーリング係数 (スカラー)
R: 回転行列 (3x3)
t: 平行移動ベクトル (3x1)
Returns:
transformed_points: 変換後の点群 (Nx3 のNumPy配列)
"""
return s * (R @ points.T).T + t
def calculate_optimal_similarity_transform_weighted(source_points, target_points, weights):
"""
重み付きの2つの点群間の最適な相似変換(スケール、回転、平行移動)を計算する
Parameters:
source_points: 変換元の点群 (Nx3 のNumPy配列)
target_points: 変換先の点群 (Nx3 のNumPy配列)
weights: 各点の重み (Nx1 のNumPy配列)
Returns:
(s, R, t): スケーリング係数 (スカラー), 回転行列 (3x3), 平行移動ベクトル (3x1)
"""
# 重みを正規化
weights = weights / np.sum(weights) if np.sum(weights) > 0 else np.ones_like(weights) / len(weights)
# 重み付き重心を計算
centroid_source = np.sum(source_points * weights[:, np.newaxis], axis=0)
centroid_target = np.sum(target_points * weights[:, np.newaxis], axis=0)
print(f"centroid_source: {centroid_source}")
print(f"centroid_target: {centroid_target}")
print(f"centroid_source_original: {np.mean(source_points, axis=0)}")
print(f"centroid_target_original: {np.mean(target_points, axis=0)}")
# 重心を原点に移動
source_centered = source_points - centroid_source
target_centered = target_points - centroid_target
# 重み付きソース点群の二乗和を計算(スケーリング係数の計算用)
source_scale = np.sum(weights[:, np.newaxis] * source_centered**2)
# 重み付き共分散行列を計算
H = (source_centered * weights[:, np.newaxis]).T @ target_centered
# 特異値分解
U, S, Vt = np.linalg.svd(H)
# 回転行列を計算
R = Vt.T @ U.T
# 反射を防ぐ(行列式が負の場合)
if np.linalg.det(R) < 0:
Vt[-1, :] *= -1
R = Vt.T @ U.T
# 最適なスケーリング係数を計算
trace_RSH = np.sum(S)
s = trace_RSH / source_scale if source_scale > 0 else 1.0
# 平行移動ベクトルを計算
t = centroid_target - s * (R @ centroid_source)
return s, R, t
def get_distance_weight_influence_factors(obj, influence_range=1.0, min_weight_diff_threshold=0.01):
"""
DistanceWeight頂点グループから影響度係数を取得する
Parameters:
obj: 対象メッシュオブジェクト
influence_range: 最大影響度と最小影響度の差(0.0-1.0、デフォルト0.5)
min_weight_diff_threshold: 最大値と最小値の差の最小閾値(これより小さい場合は均等影響度)
Returns:
influence_factors: 各頂点の影響度係数 (Nx1 のNumPy配列)、または None(頂点グループが無い場合)
"""
# DistanceWeight頂点グループを検索
distance_weight_group = None
for group in obj.vertex_groups:
if group.name == "DistanceWeight":
distance_weight_group = group
break
if distance_weight_group is None:
return None
# 各頂点のウェイト値を取得
num_vertices = len(obj.data.vertices)
weights = np.zeros(num_vertices)
for i in range(num_vertices):
try:
weight = 0.0
for g in obj.data.vertices[i].groups:
if g.group == distance_weight_group.index:
weight = g.weight
break
weights[i] = weight
except RuntimeError:
weights[i] = 0.0 # 頂点がグループに属していない場合
# 最大値と最小値を計算
max_weight = np.max(weights)
min_weight = np.min(weights)
weight_range = max_weight - min_weight
# 範囲が閾値より小さい場合は等しい影響度を返す
if weight_range < min_weight_diff_threshold:
print(f"DistanceWeight range ({weight_range:.4f}) is below threshold ({min_weight_diff_threshold:.4f}), using uniform influence")
return np.ones(num_vertices)
# 影響度を計算
# 最大ウェイトの頂点は影響度1.0、最小ウェイトの頂点は影響度(1.0 - influence_range)
min_influence = 1.0 - influence_range
max_influence = 1.0
# ウェイト値を0-1に正規化し、影響度範囲にマッピング
normalized_weights = (weights - min_weight) / weight_range
influence_factors = min_influence + normalized_weights * influence_range
print(f"DistanceWeight influence: min={min_influence:.3f}, max={max_influence:.3f}, range={influence_range:.3f}")
print(f"DistanceWeight: min={min_weight:.3f}, max={max_weight:.3f}, range={weight_range:.3f}")
return influence_factors
def join_objects(objects, target_name=None):
"""
複数のオブジェクトを結合する
Parameters:
objects: 結合するオブジェクトのリスト
target_name: 結合後のオブジェクト名(オプション)
Returns:
bpy.types.Object: 結合されたオブジェクト(成功時)または None(失敗時)
"""
if not objects:
return None
if len(objects) == 1:
if target_name and objects[0].name != target_name:
objects[0].name = target_name
return objects[0]
# アクティブオブジェクトを保存
original_active = bpy.context.view_layer.objects.active
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
# 結合するオブジェクトを選択
for obj in objects:
obj.select_set(True)
# 最初のオブジェクトをアクティブに設定
bpy.context.view_layer.objects.active = objects[0]
# オブジェクトを結合
bpy.ops.object.join()
# 結合されたオブジェクトを取得
joined_obj = bpy.context.view_layer.objects.active
# 名前を設定
if target_name:
joined_obj.name = target_name
# 元のアクティブオブジェクトを復元
bpy.context.view_layer.objects.active = original_active
return joined_obj
def batch_process_vertices_simple(vertices, kdtree, field_points, delta_positions, field_weights,
field_matrix, field_matrix_inv, target_matrix, target_matrix_inv, batch_size=1000):
"""
頂点をバッチで処理
"""
num_vertices = len(vertices)
results = np.zeros((num_vertices, 3))
for start_idx in range(0, num_vertices, batch_size):
end_idx = min(start_idx + batch_size, num_vertices)
batch_vertices = vertices[start_idx:end_idx]
# バッチ内の全頂点をフィールド空間に変換
batch_world = np.array([target_matrix @ Vector(v) for v in batch_vertices])
batch_field = np.array([field_matrix_inv @ Vector(v) for v in batch_world])
# 最近接点の検索(バッチ処理)
distances, indices = kdtree.query(batch_field, k=64)
# 各頂点の変位を計算
for i, (vert_field, dist, idx) in enumerate(zip(batch_field, distances, indices)):
x_sign = 1 if vert_field[0] >= 0 else -1
# 重み付き変位を計算
weights = 1.0 / (dist + 0.0001)
if weights.sum() > 0.0:
weights /= weights.sum()
else:
weights *= 0
deltas = delta_positions[idx]
displacement = (deltas * weights[:, np.newaxis]).sum(axis=0)
# ワールド空間での変位を計算
world_displacement = field_matrix.to_3x3() @ Vector(displacement)
results[start_idx + i] = batch_world[i] + world_displacement
return results
def process_field_deformation_simple(target_obj, field_data_path, blend_shape_labels=None, clothing_avatar_data=None, shape_key_name="SymmetricDeformed", ignore_blendshape=None, target_shape_key=None, base_shape_key=None):
# 評価済みメッシュから頂点位置(元の状態)を取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# シェイプキーの準備
if target_obj.data.shape_keys is None:
target_obj.shape_key_add(name='Basis')
shape_key = target_obj.shape_key_add(name=shape_key_name)
shape_key.value = 1.0
data = np.load(field_data_path, allow_pickle=True)
field_points = data['field_points']
delta_positions = data['delta_positions']
field_weights = data['weights']
field_matrix = Matrix(data['world_matrix'])
field_matrix_inv = field_matrix.inverted()
kdtree = cKDTree(field_points)
original_positions = np.array([v.co for v in eval_mesh.vertices])
chosen_positions = batch_process_vertices_simple(
original_positions,
kdtree,
field_points,
delta_positions,
field_weights,
field_matrix,
field_matrix_inv,
target_obj.matrix_world,
target_obj.matrix_world.inverted(),
batch_size=1000
)
blendshape_ignored = True
armature_obj = get_armature_from_modifier(target_obj)
if not armature_obj:
raise ValueError("Armatureモディファイアが見つかりません")
matrix_armature_inv_fallback = Matrix.Identity(4)
for i, world_pos in enumerate(chosen_positions):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(world_pos)
local_pos = target_obj.matrix_world.inverted() @ undeformed_world_pos
shape_key.data[i].co = local_pos
matrix_armature_inv_fallback = matrix_armature_inv
return shape_key, blendshape_ignored
def apply_blendshape_deformation_fields(target_obj, field_data_path, blend_shape_labels=None, clothing_avatar_data=None, blend_shape_values=None):
"""
BlendShape用 Deformation Field を適用し、結果を_BaseShapeがついた名前のシェイプキーとして保存する
Parameters:
target_obj: 対象メッシュオブジェクト
field_data_path: Deformation Fieldのパス
blend_shape_labels: 適用するブレンドシェイプのラベルリスト
clothing_avatar_data: 衣装アバターデータ
blend_shape_values: ブレンドシェイプの値のリスト
"""
if not blend_shape_labels or not clothing_avatar_data:
return
# ブレンドシェイプ値の辞書を作成
blend_shape_value_dict = {}
if blend_shape_values:
for i, label in enumerate(blend_shape_labels):
if i < len(blend_shape_values):
blend_shape_value_dict[label] = blend_shape_values[i]
else:
blend_shape_value_dict[label] = 1.0 # 不足している場合は1.0
else:
# blend_shape_valuesがNoneの場合はすべて1.0
for label in blend_shape_labels:
blend_shape_value_dict[label] = 1.0
print(f"ブレンドシェイプ値: {blend_shape_value_dict}")
# オリジナルの頂点位置を取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
original_positions = np.array([v.co for v in eval_mesh.vertices])
armature_obj = get_armature_from_modifier(target_obj)
label_to_filepath = {}
label_to_mask_bones = {}
for field in clothing_avatar_data.get("invertedBlendShapeFields", []):
label_to_filepath[field["label"]] = field["filePath"]
if "maskBones" in field:
label_to_mask_bones[field["label"]] = field["maskBones"]
label_to_filepath_normal = {}
label_to_mask_bones_normal = {}
for field in clothing_avatar_data.get("blendShapeFields", []):
label_to_filepath_normal[field["label"]] = field["filePath"]
if "maskBones" in field:
label_to_mask_bones_normal[field["label"]] = field["maskBones"]
# 各ブレンドシェイプラベルに対して処理
for label in blend_shape_labels:
if label in label_to_filepath and (target_obj.data.shape_keys is None or label not in target_obj.data.shape_keys.key_blocks):
blend_field_path = os.path.join(os.path.dirname(field_data_path), label_to_filepath[label])
if os.path.exists(blend_field_path):
start_value = 1.0 - blend_shape_value_dict[label]
if start_value < 0.00001:
start_value = 0.0
end_value = 1.0 # 終了値は常に1.0
print(f"Applying inverted blend shape field for {label}: {start_value} -> {end_value}")
field_info_blend = get_deformation_field_multi_step(blend_field_path)
blend_points = field_info_blend['all_field_points']
blend_deltas = field_info_blend['all_delta_positions']
blend_field_weights = field_info_blend['field_weights']
blend_matrix = field_info_blend['world_matrix']
blend_matrix_inv = field_info_blend['world_matrix_inv']
blend_k_neighbors = field_info_blend['kdtree_query_k']
mask_weights = None
if label in label_to_mask_bones:
mask_weights = create_blendshape_mask(target_obj, label_to_mask_bones[label], clothing_avatar_data, field_name=label, store_debug_mask=True)
# mask_weightsがすべて0である場合は処理をスキップ
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping {label} - all mask weights are zero")
continue
# 新しいカスタムレンジ処理を使用
world_positions = batch_process_vertices_with_custom_range(
original_positions,
blend_points,
blend_deltas,
blend_field_weights,
blend_matrix,
blend_matrix_inv,
target_obj.matrix_world,
target_obj.matrix_world.inverted(),
start_value,
end_value,
deform_weights=mask_weights,
batch_size=1000,
k=blend_k_neighbors
)
# シェイプキーとして保存
shape_key_name = f"{label}_BaseShape"
if target_obj.data.shape_keys is None:
target_obj.shape_key_add(name="Basis", from_mix=False)
shape_key = target_obj.shape_key_add(name=shape_key_name, from_mix=False)
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(world_positions)):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(world_positions[i])
local_pos = target_obj.matrix_world.inverted() @ undeformed_world_pos
shape_key.data[i].co = local_pos
matrix_armature_inv_fallback = matrix_armature_inv
# 打ち消すシェイプキーを作成
inv_shape_key = target_obj.shape_key_add(name=f"{label}_temp", from_mix=False)
# 生成されたシェイプキーを打ち消す変位を計算して設定
basis_key = target_obj.data.shape_keys.key_blocks["Basis"]
if start_value < 0.00001:
for i in range(len(world_positions)):
# BaseShapeの変位を計算(現在の位置 - オリジナル位置)
base_displacement = Vector(shape_key.data[i].co) - Vector(basis_key.data[i].co)
# 打ち消すための変位(逆方向)を設定
inv_shape_key.data[i].co = Vector(basis_key.data[i].co) - base_displacement
else:
blend_field_path = os.path.join(os.path.dirname(field_data_path), label_to_filepath_normal[label])
if os.path.exists(blend_field_path):
start_value = blend_shape_value_dict[label]
end_value = 1.0 # 終了値は常に1.0
print(f"Applying blend shape field for {label}: {start_value} -> {end_value}")
field_info_blend = get_deformation_field_multi_step(blend_field_path)
blend_points = field_info_blend['all_field_points']
blend_deltas = field_info_blend['all_delta_positions']
blend_field_weights = field_info_blend['field_weights']
blend_matrix = field_info_blend['world_matrix']
blend_matrix_inv = field_info_blend['world_matrix_inv']
blend_k_neighbors = field_info_blend['kdtree_query_k']
mask_weights = None
if label in label_to_mask_bones_normal:
mask_weights = create_blendshape_mask(target_obj, label_to_mask_bones_normal[label], clothing_avatar_data, field_name=label, store_debug_mask=True)
# mask_weightsがすべて0である場合は処理をスキップ
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping {label} - all mask weights are zero")
continue
# 新しいカスタムレンジ処理を使用
world_positions = batch_process_vertices_with_custom_range(
original_positions,
blend_points,
blend_deltas,
blend_field_weights,
blend_matrix,
blend_matrix_inv,
target_obj.matrix_world,
target_obj.matrix_world.inverted(),
start_value,
end_value,
deform_weights=mask_weights,
batch_size=1000,
k=blend_k_neighbors
)
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(world_positions)):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(world_positions[i])
local_pos = target_obj.matrix_world.inverted() @ undeformed_world_pos
base_displacement = Vector(shape_key.data[i].co) - Vector(basis_key.data[i].co)
inv_shape_key.data[i].co = local_pos - base_displacement
matrix_armature_inv_fallback = matrix_armature_inv
print(f"Created shape key: {shape_key_name}")
print(f"Created inverse shape key: {label}_temp")
else:
print(f"Warning: Field file not found for blend shape {label}")
else:
print(f"Warning: No field data found for blend shape {label}")
def process_field_deformation(target_obj, field_data_path, blend_shape_labels=None, clothing_avatar_data=None, shape_key_name="SymmetricDeformed", ignore_blendshape=None, target_shape_key=None, base_shape_key=None):
# ① 評価済みメッシュから頂点位置(元の状態)を取得
if target_shape_key is not None:
# すべてのシェイプキーの値を0に設定
for sk in target_obj.data.shape_keys.key_blocks:
sk.value = 0.0
# 対象のシェイプキーの値を1に設定
target_shape_key.value = 1.0
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj_original = target_obj.evaluated_get(depsgraph)
eval_mesh_original = eval_obj_original.data
original_positions = np.array([v.co for v in eval_mesh_original.vertices])
used_shape_keys = []
if ignore_blendshape is None or ignore_blendshape is False:
if blend_shape_labels and clothing_avatar_data:
# 事前に作成されたシェイプキーから頂点位置を取得
for label in blend_shape_labels:
# ignore_blendshapeがNoneの場合は自動判別。衣装モデルに同名のシェイプキーがある場合は適用しない
if ignore_blendshape is None and target_obj.data.shape_keys and label in target_obj.data.shape_keys.key_blocks:
print(f"Skipping {label} - already has shape key")
continue
target_avatar_base_shape_key_name = f"{label}_BaseShape"
if target_obj.data.shape_keys and target_avatar_base_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_avatar_base_shape_key = target_obj.data.shape_keys.key_blocks[target_avatar_base_shape_key_name]
target_avatar_base_shape_key.value = 1.0
print(f"Using shape key {target_avatar_base_shape_key_name} for BlendShape deformation")
used_shape_keys.append(target_avatar_base_shape_key_name)
else:
print(f"Warning: Shape key {target_avatar_base_shape_key_name} not found")
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# blend_positions には BlendShape適用後の頂点位置が入る
blend_positions = np.array([v.co for v in eval_mesh.vertices])
# ③ メインの Deformation Field 情報を取得
field_info = get_deformation_field_multi_step(field_data_path)
field_points = field_info['all_field_points']
delta_positions = field_info['all_delta_positions']
field_weights = field_info['field_weights']
field_matrix = field_info['world_matrix']
field_matrix_inv = field_info['world_matrix_inv']
k_neighbors = field_info['kdtree_query_k']
final_positions = batch_process_vertices_multi_step(
blend_positions,
field_points,
delta_positions,
field_weights,
field_matrix,
field_matrix_inv,
target_obj.matrix_world,
target_obj.matrix_world.inverted(),
None,
batch_size=1000,
k=k_neighbors
)
for label in used_shape_keys:
target_obj.data.shape_keys.key_blocks[label].value = 0.0
armature_obj = get_armature_from_modifier(target_obj)
if not armature_obj:
raise ValueError("Armatureモディファイアが見つかりません")
# ⑩ シェイプキーの保存または差分計算
if target_shape_key is not None and base_shape_key is not None:
# 差分計算モード: base_shape_keyからの差分を計算してtarget_shape_keyに保存
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(original_positions)):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(final_positions[i])
local_pos = target_obj.matrix_world.inverted() @ undeformed_world_pos
# base_shape_keyからの差分を計算
base_pos = base_shape_key.data[i].co
delta = local_pos - base_pos
# 差分をtarget_shape_keyに保存
target_shape_key.data[i].co = target_obj.data.vertices[i].co + delta
matrix_armature_inv_fallback = matrix_armature_inv
return target_shape_key
else:
# 通常モード: 新しいシェイプキーを作成して保存
matrix_armature_inv_fallback = Matrix.Identity(4)
if target_obj.data.shape_keys is None:
target_obj.shape_key_add(name='Basis')
shape_key_a = target_obj.shape_key_add(name=shape_key_name)
shape_key_a.value = 1.0
for i in range(len(original_positions)):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(final_positions[i])
local_pos = target_obj.matrix_world.inverted() @ undeformed_world_pos
shape_key_a.data[i].co = local_pos
matrix_armature_inv_fallback = matrix_armature_inv
return shape_key_a
class TransitionCache:
"""Transitionの実行結果をキャッシュするクラス"""
def __init__(self):
self.cache = {} # {blend_shape_combination_hash: {vertices: np.array, blendshape_values: dict}}
def get_cache_key(self, blendshape_values):
"""BlendShapeの値からキャッシュキーを生成"""
sorted_items = sorted(blendshape_values.items())
return hash(tuple(sorted_items))
def store_result(self, blendshape_values, vertices, all_blendshape_values):
"""実行結果をキャッシュに保存"""
cache_key = self.get_cache_key(blendshape_values)
# 同じキーが既に存在する場合は上書きしない
if cache_key in self.cache:
print(f"Cache key already exists, keeping existing entry: {cache_key}")
return
self.cache[cache_key] = {
'vertices': vertices.copy(),
'blendshape_values': all_blendshape_values.copy()
}
print(f"Stored new cache entry: {cache_key}")
def find_interpolation_candidates(self, target_blendshape_values, changing_blendshape, blendshape_groups=None):
"""線形補間候補を検索"""
candidates = []
# changing_blendshapeが属するBlendShapeGroupを特定
changing_blendshape_group = None
group_blendshapes = set()
if blendshape_groups:
for group in blendshape_groups:
blendshapes_in_group = group.get('blendShapeFields', [])
if changing_blendshape in blendshapes_in_group:
changing_blendshape_group = group
group_blendshapes = set(blendshapes_in_group)
break
print(f"target_blendshape_values: {target_blendshape_values}")
for cache_key, cached_data in self.cache.items():
cached_values = cached_data['blendshape_values']
print(f"cached_values: {cached_values}")
values_match = True
# BlendShapeGroupに属する場合:同じグループの他のBlendShapeの値が同じかチェック
if changing_blendshape_group:
for name in group_blendshapes:
if name != changing_blendshape and abs(cached_values.get(name, 0.0) - target_blendshape_values.get(name, 0.0)) > 1e-6:
values_match = False
break
if values_match:
cached_changing_value = cached_values.get(changing_blendshape, 0.0)
target_changing_value = target_blendshape_values.get(changing_blendshape, 0.0)
print(f"cached_changing_value: {cached_changing_value}, target_changing_value: {target_changing_value}")
candidates.append({
'cached_value': cached_changing_value,
'target_value': target_changing_value,
'vertices': cached_data['vertices'],
'distance': abs(cached_changing_value - target_changing_value)
})
return candidates
def interpolate_result(self, target_blendshape_values, changing_blendshape, blendshape_groups=None):
"""線形補間で結果を計算"""
candidates = self.find_interpolation_candidates(target_blendshape_values, changing_blendshape, blendshape_groups)
if len(candidates) < 2:
return None
target_value = target_blendshape_values.get(changing_blendshape, 0.0)
# ターゲット値を挟む候補ペアを全て見つける
valid_pairs = []
for i in range(len(candidates)):
for j in range(i + 1, len(candidates)):
val1, val2 = candidates[i]['cached_value'], candidates[j]['cached_value']
if (val1 <= target_value <= val2) or (val2 <= target_value <= val1):
if abs(val2 - val1) < 1e-6:
continue # 同じ値の場合はスキップ
interval_size = abs(val2 - val1)
valid_pairs.append({
'interval_size': interval_size,
'candidate1': candidates[i],
'candidate2': candidates[j],
'val1': val1,
'val2': val2
})
if not valid_pairs:
return None
# 区間が最も小さいペアを選択
best_pair = min(valid_pairs, key=lambda x: x['interval_size'])
# 線形補間を実行
val1, val2 = best_pair['val1'], best_pair['val2']
t = (target_value - val1) / (val2 - val1)
vertices1 = best_pair['candidate1']['vertices']
vertices2 = best_pair['candidate2']['vertices']
interpolated_vertices = vertices1 + t * (vertices2 - vertices1)
print(f"Using linear interpolation with interval size {best_pair['interval_size']:.6f} for {changing_blendshape}")
return interpolated_vertices
def execute_transitions_with_cache(deferred_transitions, transition_cache, target_obj, rigid_transformation=False):
"""遅延されたTransitionをキャッシュシステムを使って実行"""
print(f"Executing {len(deferred_transitions)} deferred transitions with caching...")
# BlendShapeGroupsの情報を取得(最初のtransition_dataから)
blendshape_groups = None
if deferred_transitions:
base_avatar_data = deferred_transitions[0].get('base_avatar_data')
if base_avatar_data:
blendshape_groups = base_avatar_data.get('blendShapeGroups', [])
# labelに対応する初期シェイプキーの名前shape_key_nameを取得
# 初期状態をキャッシュに保存
target_shape_key_label_to_name = {}
for transition_data in deferred_transitions:
config_data = transition_data['config_data']
shape_key_name = transition_data['target_shape_key_name']
shape_key_label = transition_data['target_label']
target_shape_key_label_to_name[shape_key_label] = shape_key_name
target_shape_key = None
if target_obj.data.shape_keys and shape_key_name and shape_key_name in target_obj.data.shape_keys.key_blocks:
target_shape_key = target_obj.data.shape_keys.key_blocks.get(shape_key_name)
if target_shape_key is None:
print(f"Target shape key {shape_key_name} / {target_shape_key.name} not found")
continue
print(f"Target shape key {shape_key_name} / {target_shape_key.name} found")
# 現在のtarget_shape_keyの位置を取得
initial_vertices = np.array([v.co for v in target_shape_key.data])
initial_settings = []
if shape_key_label == 'Basis':
initial_settings = config_data.get('targetBlendShapeSettings', [])
else:
blend_fields = config_data.get('blendShapeFields', [])
for blend_field in blend_fields:
if blend_field['label'] == shape_key_label:
initial_settings = blend_field.get('targetBlendShapeSettings', [])
break
if not initial_settings:
initial_settings = config_data.get('targetBlendShapeSettings', [])
initial_blendshape_values = {}
for setting in initial_settings:
blend_shape_name = setting.get('name', '')
blend_shape_value = setting.get('value', 0.0)
if blend_shape_name:
initial_blendshape_values[blend_shape_name] = blend_shape_value
# 初期状態をキャッシュに保存
transition_cache.store_result(initial_blendshape_values, initial_vertices, initial_blendshape_values)
print(f"Cached initial state for {shape_key_label} with {len(initial_blendshape_values)} BlendShape values")
initial_vertices_dict = {}
# 各transition_dataのoperationsとtransition_setを事前に収集
transition_operations = []
for transition_data in deferred_transitions:
config_data = transition_data['config_data']
target_label = transition_data['target_label']
target_shape_key_name = transition_data['target_shape_key_name']
clothing_avatar_data = transition_data['clothing_avatar_data']
# Transitionの詳細を取得
transition_sets = config_data.get('blend_shape_transition_sets', [])
target_transition_set = None
target_shape_key_name = None
for transition_set in transition_sets:
source_label = transition_set.get('source_label', '')
target_shape_key_name = target_shape_key_label_to_name.get(source_label, '')
print(f"source_label: {source_label}, target_shape_key_name: {target_shape_key_name}")
if transition_set.get('label', '') == target_label and target_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_transition_set = transition_set
print(f"Found transition set for {target_label} with source label {source_label}")
break
if not target_transition_set:
# デフォルトのTransitionを試行
default_transition_sets = config_data.get('blend_shape_default_transition_sets', [])
for default_transition_set in default_transition_sets:
source_label = default_transition_set.get('source_label', '')
target_shape_key_name = target_shape_key_label_to_name.get(source_label, '')
print(f"source_label: {source_label}, target_shape_key_name: {target_shape_key_name}")
if default_transition_set.get('label', '') == target_label and target_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_transition_set = default_transition_set
print(f"Found default transition set for {target_label} with source label {source_label}")
break
if not target_transition_set:
print(f"No transition set found for {target_label}")
continue
# transition_setのcurrent_settingsから初期BlendShape値を取得
current_settings = target_transition_set.get('current_settings', [])
source_label = target_transition_set['source_label']
initial_blendshape_values = {}
# current_settingsからBlendShapeの値を設定
for setting in current_settings:
blend_shape_name = setting.get('name', '')
blend_shape_value = setting.get('value', 0.0)
if blend_shape_name:
initial_blendshape_values[blend_shape_name] = blend_shape_value
# 選択された最もTransition後の状態に近いShapeKeyを取得
target_shape_key_name = target_shape_key_label_to_name.get(source_label, None)
target_shape_key = None
if target_obj.data.shape_keys and target_shape_key_name and target_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_shape_key = target_obj.data.shape_keys.key_blocks.get(target_shape_key_name)
if target_shape_key is None:
print(f"Target shape key {target_shape_key_name} not found")
continue
else:
print(f"Target shape key {target_shape_key_name} / {target_shape_key.name} found")
# 現在のtarget_shape_keyの位置を取得
initial_vertices = np.array([v.co for v in target_shape_key.data])
print(f"target_transition_set: {target_transition_set}")
initial_vertices_dict[transition_data['target_shape_key_name']] = initial_vertices.copy()
for transition in target_transition_set.get('transitions', []):
operations = transition.get('operations', [])
if not operations:
print(f"No operations found for {target_label}")
continue
print(f"number of operations: {len(operations)}")
print(f"operations: {operations}")
# operationsとtransition_dataをセットで保存(初期BlendShape値も含める)
transition_operations.append({
'operations': operations,
'transition_data': transition_data,
'current_blendshape_values': initial_blendshape_values.copy(),
'initial_vertices': initial_vertices.copy(),
'current_vertices': initial_vertices.copy(),
'mask_bones': target_transition_set.get("maskBones", [])
})
if target_transition_set.get('transitions', []) is None or len(target_transition_set.get('transitions', [])) == 0:
print(f"No transitions found for {target_label}")
# 空のoperationsとtransition_dataを保存(初期BlendShape値も含める)
transition_operations.append({
'operations': [],
'transition_data': transition_data,
'current_blendshape_values': initial_blendshape_values.copy(),
'initial_vertices': initial_vertices.copy(),
'current_vertices': initial_vertices.copy(),
'mask_bones': target_transition_set.get("maskBones", [])
})
# 最大operation数を取得
max_operations = 0
for item in transition_operations:
max_operations = max(max_operations, len(item['operations']))
# operation順序別に実行
for operation_index in range(max_operations):
print(f"Executing operation index {operation_index + 1}")
for item in transition_operations:
operations = item['operations']
transition_data = item['transition_data']
target_label = transition_data['target_label']
target_shape_key_name = transition_data['target_shape_key_name']
clothing_avatar_data = transition_data['clothing_avatar_data']
base_avatar_data = transition_data['base_avatar_data']
current_blendshape_values = item['current_blendshape_values']
print(f"target_label: {target_label}")
# 現在のoperation_indexが存在するかチェック
if operation_index >= len(operations):
continue
operation = operations[operation_index]
changing_shape_key = operation.get('blend_shape', '')
if not changing_shape_key:
print(f"Warning: No target blend_shape found in operation for {operation_index}")
continue
# operationのto_valueを取得してBlendShape値を更新
target_blendshape_values = current_blendshape_values.copy()
if 'to_value' in operation:
target_blendshape_values[changing_shape_key] = operation['to_value']
else:
print(f"Warning: No to_value found in operation for {changing_shape_key}")
continue
# ターゲットBlendShapeを取得
target_shape_key = None
if target_obj.data.shape_keys and target_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_shape_key = target_obj.data.shape_keys.key_blocks.get(target_shape_key_name)
if target_shape_key is None:
print(f"Target shape key {target_shape_key_name} not found")
continue
operation_label = operation['blend_shape']
blendshape_fields = base_avatar_data.get('blendShapeFields', [])
mask_weights = None
for blend_field in blendshape_fields:
if blend_field['label'] == operation_label:
mask_bones = blend_field.get("maskBones", [])
if mask_bones:
print(f"mask_bones is found for {operation_label} : {mask_bones}")
mask_weights = create_blendshape_mask(target_obj, mask_bones, clothing_avatar_data, field_name=operation_label, store_debug_mask=False)
break
# mask_weightsがNoneの場合はmask_weightsを1.0にする
if mask_weights is None:
print(f"mask_weights is None for {operation_label}")
mask_weights = [1.0] * len(target_obj.data.vertices)
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping operation for {operation_label} - all mask weights are zero")
item['current_blendshape_values'] = target_blendshape_values.copy()
continue
# キャッシュから線形補間で結果を取得を試行
interpolated_vertices = transition_cache.interpolate_result(target_blendshape_values, changing_shape_key, blendshape_groups)
if interpolated_vertices is not None:
print(f"Using cached interpolation for {changing_shape_key} = {target_blendshape_values[changing_shape_key]} (label: {transition_data['target_label']})")
#現在のcurrent_verticesの位置を更新
current_vertices = item['current_vertices']
for i in range(len(target_obj.data.vertices)):
current_vertices[i] = (1.0 - mask_weights[i]) * current_vertices[i] + mask_weights[i] * interpolated_vertices[i]
item['current_blendshape_values'] = target_blendshape_values.copy()
continue
# キャッシュにない場合は実際にoperationを実行してキャッシュに保存
print(f"Executing and caching operation for {changing_shape_key} = {target_blendshape_values[changing_shape_key]} (label: {transition_data['target_label']})")
current_vertices = item['current_vertices']
# current_verticesから一時的なシェイプキーを作成
temp_shape_key_name = f"{changing_shape_key}_transition_operation"
temp_shape_key = None
if temp_shape_key_name in target_obj.data.shape_keys.key_blocks:
temp_shape_key = target_obj.data.shape_keys.key_blocks[temp_shape_key_name]
else:
temp_shape_key = target_obj.shape_key_add(name=temp_shape_key_name)
for i in range(len(target_obj.data.vertices)):
temp_shape_key.data[i].co = current_vertices[i]
# BlendShapeSettingsを適用
# この時点でtemp_shape_key.dataの位置は変更されている
apply_blendshape_operation_with_shape_key_name(target_obj, operation, temp_shape_key_name, rigid_transformation)
# 現在のcurrent_verticesの位置を更新
for i in range(len(target_obj.data.vertices)):
current_vertices[i] = (1.0 - mask_weights[i]) * current_vertices[i] + mask_weights[i] * temp_shape_key.data[i].co
# 一時的なシェイプキーを削除
target_obj.shape_key_remove(temp_shape_key)
# 結果をキャッシュに保存
transition_cache.store_result(target_blendshape_values, current_vertices, target_blendshape_values)
item['current_blendshape_values'] = target_blendshape_values.copy()
print(f"Updated BlendShape values for {transition_data['target_label']}: {changing_shape_key} = {target_blendshape_values[changing_shape_key]}")
for target_shape_key_name, initial_vertices in initial_vertices_dict.items():
target_shape_key = None
if target_obj.data.shape_keys and target_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_shape_key = target_obj.data.shape_keys.key_blocks.get(target_shape_key_name)
if target_shape_key is None:
print(f"Initialize: Target shape key {target_shape_key_name} / {target_shape_key.name} not found")
continue
else:
print(f"Initialize: Target shape key {target_shape_key_name} / {target_shape_key.name} found")
for i in range(len(target_obj.data.vertices)):
target_shape_key.data[i].co = initial_vertices[i]
#transition_operationsの最後のcurrent_verticesを取得し、それをtarget_labelと同じ名前のシェイプキーに適用する
#その際にtransition_setのmask_weightsを適用する
used_shape_key_names = set()
created_shape_key_names = []
created_shape_key_mask_weights = {}
for item in transition_operations:
target_shape_key_name = item['transition_data']['target_shape_key_name']
clothing_avatar_data = item['transition_data']['clothing_avatar_data']
target_label = item['transition_data']['target_label']
# target_labelがBasisの場合はtarget_shape_key_nameを使用、それ以外はtarget_labelを使用
shape_key_to_use = target_shape_key_name if target_label == 'Basis' else target_label
shape_key_created = False
# シェイプキーを取得または作成
# target_obj.data.shape_keysが存在し、shape_key_to_useがtarget_obj.data.shape_keys.key_blocksに存在する場合はtarget_shape_keyを取得
# ただし、末尾に{shape_key_to_use}_generatedがある場合はそちらを優先する
target_shape_key = None
generated_shape_key_name = f"{shape_key_to_use}_generated"
if target_obj.data.shape_keys and generated_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_shape_key = target_obj.data.shape_keys.key_blocks.get(generated_shape_key_name)
print(f"Generated target shape key {generated_shape_key_name} found")
elif target_obj.data.shape_keys and shape_key_to_use in target_obj.data.shape_keys.key_blocks:
target_shape_key = target_obj.data.shape_keys.key_blocks.get(shape_key_to_use)
print(f"Target shape key {shape_key_to_use} found")
else:
# シェイプキーが存在しない場合は新規作成(Basisは作成しない)
if target_label != 'Basis':
if not target_obj.data.shape_keys:
# Basisシェイプキーがない場合は作成
target_obj.shape_key_add(name='Basis', from_mix=False)
target_shape_key = target_obj.shape_key_add(name=shape_key_to_use, from_mix=False)
print(f"Created new shape key: {shape_key_to_use}")
created_shape_key_names.append(shape_key_to_use)
shape_key_created = True
else:
print(f"Warning: Basis shape key {shape_key_to_use} not found")
if target_shape_key is None:
print(f"Failed to get or create shape key: {shape_key_to_use}")
continue
used_shape_key_names.add(target_shape_key.name)
initial_vertices = item['initial_vertices']
current_vertices = item['current_vertices']
mask_bones = item['mask_bones']
mask_weights = None
if mask_bones:
mask_weights = create_blendshape_mask(target_obj, mask_bones, clothing_avatar_data, field_name=target_label, store_debug_mask=False)
if mask_weights is None:
mask_weights = [1.0] * len(target_obj.data.vertices)
if shape_key_created:
created_shape_key_mask_weights[target_shape_key.name] = mask_weights
for i in range(len(target_obj.data.vertices)):
target_shape_key.data[i].co = mask_weights[i] * (current_vertices[i] - initial_vertices[i]) + target_shape_key.data[i].co
print("Finished executing deferred transitions")
print(f"Created shape keys: {created_shape_key_names}")
return transition_operations, created_shape_key_mask_weights, used_shape_key_names
def subdivide_breast_faces(target_obj, clothing_avatar_data):
# subdivisionがTrueの場合、胸のボーンに関連する面を事前に細分化
if clothing_avatar_data:
breast_related_faces = set()
# LeftBreastとRightBreastのボーン名を取得
breast_bone_names = []
for bone_mapping in clothing_avatar_data.get("humanoidBones", []):
if bone_mapping["humanoidBoneName"] in ["LeftBreast", "RightBreast"]:
breast_bone_names.append(bone_mapping["boneName"])
# 補助ボーンも取得
for aux_bone_group in clothing_avatar_data.get("auxiliaryBones", []):
if aux_bone_group["humanoidBoneName"] in ["LeftBreast", "RightBreast"]:
breast_bone_names.extend(aux_bone_group["auxiliaryBones"])
# 胸のボーンに関連する頂点を特定
breast_vertices = set()
for bone_name in breast_bone_names:
if bone_name in target_obj.vertex_groups:
vertex_group = target_obj.vertex_groups[bone_name]
for vertex in target_obj.data.vertices:
for group in vertex.groups:
if group.group == vertex_group.index and group.weight > 0.001:
breast_vertices.add(vertex.index)
# 胸の頂点を含む面を特定
if breast_vertices:
for face in target_obj.data.polygons:
if any(vertex_idx in breast_vertices for vertex_idx in face.vertices):
breast_related_faces.add(face.index)
if breast_related_faces:
print(f"Subdividing {len(breast_related_faces)} breast-related faces...")
subdivide_faces(target_obj, list(breast_related_faces), cuts=1)
def apply_symmetric_field_delta(target_obj, field_data_path, blend_shape_labels=None, clothing_avatar_data=None, base_avatar_data=None, subdivision=True, shape_key_name="SymmetricDeformed", skip_blend_shape_generation=False, config_data=None, ignore_blendshape=None):
"""
保存された対称Deformation Field差分データを読み込みメッシュに適用する(最適化版、多段階対応)。
※BlendShape用のDeformation Fieldを先に適用した場合と、メインのみ適用した場合の交差面の割合を
比較し、所定の条件下ではBlendShapeの変位を無視する処理を行います。
"""
# Transitionキャッシュを初期化
transition_cache = TransitionCache()
deferred_transitions = [] # 遅延実行するTransitionのリスト
MAX_ITERATIONS = 0 # 最大繰り返し回数
# メインの処理ループ(従来の単一ステップ処理)
iteration = 0
shape_key = None
basis_field_path = os.path.join(os.path.dirname(field_data_path), field_data_path)
while iteration <= MAX_ITERATIONS:
original_shape_key_state = save_shape_key_state(target_obj)
print(f"selected field_data_path: {basis_field_path}")
# シェイプキーを作成して変形を適用
if shape_key:
target_obj.shape_key_remove(shape_key)
shape_key = process_field_deformation(target_obj, basis_field_path, blend_shape_labels, clothing_avatar_data, shape_key_name, ignore_blendshape)
restore_shape_key_state(target_obj, original_shape_key_state)
# Basis遷移を遅延実行リストに追加
if config_data:
deferred_transitions.append({
'target_obj': target_obj,
'config_data': config_data,
'target_label': 'Basis',
'target_shape_key_name': shape_key_name,
'base_avatar_data': base_avatar_data,
'clothing_avatar_data': clothing_avatar_data,
'save_original_shape_key': False
})
# 新たな交差を検出
intersections = find_intersecting_faces_bvh(target_obj)
print(f"Iteration {iteration + 1}: Intersecting faces: {len(intersections)}")
if not subdivision:
print("Subdivision skipped")
break
if not intersections:
print("No intersections detected")
break
if iteration == MAX_ITERATIONS:
print("Maximum iterations reached")
break
# 新たな交差が検出された場合、それらの面を細分化
# subdivide_faces(target_obj, intersections)
iteration += 1
# configファイルのblendShapeFieldsを処理するためのラベルセットを作成
config_blend_shape_labels = set()
config_generated_shape_keys = {} # 後続処理の対象外にするシェイプキー名を保存
additional_shape_keys = set() # 追加で処理するシェイプキー名を保存
non_relative_shape_keys = set() # 相対的な変位を持たないシェイプキー名を保存
skipped_shape_keys = set()
label_to_target_shape_key_name = {'Basis': shape_key_name}
# 1. configファイルのblendShapeFieldsを先に処理
if config_data and "blendShapeFields" in config_data:
print("Processing config blendShapeFields...")
for blend_field in config_data["blendShapeFields"]:
label = blend_field["label"]
source_label = blend_field["sourceLabel"]
field_path = os.path.join(os.path.dirname(field_data_path), blend_field["path"])
print(f"selected field_path: {field_path}")
source_blend_shape_settings = blend_field.get("sourceBlendShapeSettings", [])
if (blend_shape_labels is None or source_label not in blend_shape_labels) and source_label not in target_obj.data.shape_keys.key_blocks:
print(f"Skipping {label} - source label {source_label} not in shape keys")
skipped_shape_keys.add(label)
continue
# マスクウェイトを取得
mask_bones = blend_field.get("maskBones", [])
mask_weights = None
if mask_bones:
mask_weights = create_blendshape_mask(target_obj, mask_bones, clothing_avatar_data, field_name=label, store_debug_mask=True)
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping {label} - all mask weights are zero")
continue
# 対象メッシュオブジェクトの元のシェイプキー設定を保存
original_shape_key_state = save_shape_key_state(target_obj)
# すべてのシェイプキーの値を0にする
if target_obj.data.shape_keys:
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 最初のConfig Pairでの対象シェイプキー(1が前提)もしくは前のConfig PairでTransition後のシェイプキーの値を1にする
if clothing_avatar_data["name"] == "Template":
if target_obj.data.shape_keys:
if source_label in target_obj.data.shape_keys.key_blocks:
source_shape_key = target_obj.data.shape_keys.key_blocks.get(source_label)
source_shape_key.value = 1.0
print(f"source_label: {source_label} is found in shape keys")
else:
temp_shape_key_name = f"{source_label}_temp"
if temp_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_obj.data.shape_keys.key_blocks[temp_shape_key_name].value = 1.0
print(f"temp_shape_key_name: {temp_shape_key_name} is found in shape keys")
else:
# source_blend_shape_settingsを適用
for source_blend_shape_setting in source_blend_shape_settings:
source_blend_shape_name = source_blend_shape_setting.get("name", "")
source_blend_shape_value = source_blend_shape_setting.get("value", 0.0)
if source_blend_shape_name in target_obj.data.shape_keys.key_blocks:
source_blend_shape_key = target_obj.data.shape_keys.key_blocks.get(source_blend_shape_name)
source_blend_shape_key.value = source_blend_shape_value
print(f"source_blend_shape_name: {source_blend_shape_name} is found in shape keys")
else:
temp_blend_shape_key_name = f"{source_blend_shape_name}_temp"
if temp_blend_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_obj.data.shape_keys.key_blocks[temp_blend_shape_key_name].value = source_blend_shape_value
print(f"temp_blend_shape_key_name: {temp_blend_shape_key_name} is found in shape keys")
# blend_shape_key_nameを設定(同名のシェイプキーがある場合は_generatedを付ける)
blend_shape_key_name = label
if target_obj.data.shape_keys and label in target_obj.data.shape_keys.key_blocks:
blend_shape_key_name = f"{label}_generated"
# process_field_deformationを実行
if os.path.exists(field_path):
print(f"Processing config blend shape field: {label} -> {blend_shape_key_name}")
generated_shape_key = process_field_deformation(target_obj, field_path, blend_shape_labels, clothing_avatar_data, blend_shape_key_name, ignore_blendshape)
# 該当するラベルの遷移を遅延実行リストに追加
if config_data and generated_shape_key:
deferred_transitions.append({
'target_obj': target_obj,
'config_data': config_data,
'target_label': label,
'target_shape_key_name': generated_shape_key.name,
'base_avatar_data': base_avatar_data,
'clothing_avatar_data': clothing_avatar_data,
'save_original_shape_key': False
})
# 生成されたシェイプキーの値を0にする
if generated_shape_key:
generated_shape_key.value = 0.0
config_generated_shape_keys[generated_shape_key.name] = mask_weights
non_relative_shape_keys.add(generated_shape_key.name)
config_blend_shape_labels.add(label)
label_to_target_shape_key_name[label] = generated_shape_key.name
else:
print(f"Warning: Config blend shape field file not found: {field_path}")
# 元のシェイプキー設定を復元
restore_shape_key_state(target_obj, original_shape_key_state)
# transition_setsに含まれるがconfig_blend_shape_labelsに含まれないシェイプキーに対して処理
if config_data and config_data.get('blend_shape_transition_sets', []):
transition_sets = config_data.get('blend_shape_transition_sets', [])
print("Processing skipped config blendShapeFields...")
for transition_set in transition_sets:
label = transition_set["label"]
if label in config_blend_shape_labels or label == 'Basis':
continue
source_label = get_source_label(label, config_data)
if source_label not in label_to_target_shape_key_name:
print(f"Skipping {label} - source label {source_label} not in label_to_target_shape_key_name")
continue
print(f"Processing skipped config blendShapeField: {label}")
# マスクウェイトを取得
mask_bones = transition_set.get("mask_bones", [])
print(f"mask_bones: {mask_bones}")
mask_weights = None
if mask_bones:
mask_weights = create_blendshape_mask(target_obj, mask_bones, clothing_avatar_data, field_name=label, store_debug_mask=True)
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping {label} - all mask weights are zero")
continue
target_shape_key_name = label_to_target_shape_key_name[source_label]
target_shape_key = target_obj.data.shape_keys.key_blocks.get(target_shape_key_name)
if not target_shape_key:
print(f"Skipping {label} - target shape key {target_shape_key_name} not found")
continue
# target_shape_key_nameで指定されるシェイプキーのコピーを作成
blend_shape_key_name = label
if target_obj.data.shape_keys and label in target_obj.data.shape_keys.key_blocks:
blend_shape_key_name = f"{label}_generated"
skipped_blend_shape_key = target_obj.shape_key_add(name=blend_shape_key_name)
for i in range(len(skipped_blend_shape_key.data)):
skipped_blend_shape_key.data[i].co = target_shape_key.data[i].co.copy()
print(f"skipped_blend_shape_key: {skipped_blend_shape_key.name}")
if config_data and skipped_blend_shape_key:
deferred_transitions.append({
'target_obj': target_obj,
'config_data': config_data,
'target_label': label,
'target_shape_key_name': skipped_blend_shape_key.name,
'base_avatar_data': base_avatar_data,
'clothing_avatar_data': clothing_avatar_data,
'save_original_shape_key': False
})
print(f"Added deferred transition: {label} -> {skipped_blend_shape_key.name}")
config_generated_shape_keys[skipped_blend_shape_key.name] = mask_weights
non_relative_shape_keys.add(skipped_blend_shape_key.name)
config_blend_shape_labels.add(label)
label_to_target_shape_key_name[label] = skipped_blend_shape_key.name
# 2. clothing_avatar_dataのblendshapesに含まれないシェイプキーに対して処理
if target_obj.data.shape_keys:
# clothing_avatar_dataからblendshapeのリストを作成
clothing_blendshapes = set()
if clothing_avatar_data and "blendshapes" in clothing_avatar_data:
for blendshape in clothing_avatar_data["blendshapes"]:
clothing_blendshapes.add(blendshape["name"])
# 各シェイプキーについて処理
for key_block in target_obj.data.shape_keys.key_blocks:
if (key_block.name == "Basis" or
key_block.name in clothing_blendshapes or
key_block == shape_key or
key_block.name.endswith("_BaseShape") or
key_block.name in config_generated_shape_keys.keys() or
key_block.name in config_blend_shape_labels or
key_block.name.endswith("_original") or
key_block.name.endswith("_generated") or
key_block.name.endswith("_temp")):
continue # Basisまたはclothing_avatar_dataのblendshapesに含まれるもの、または_BaseShapeで終わるもの、またはconfigで生成されたものはスキップ
print(f"Processing additional shape key: {key_block.name}")
original_shape_key_state = save_shape_key_state(target_obj)
# すべてのシェイプキーの値を0に設定
for sk in target_obj.data.shape_keys.key_blocks:
sk.value = 0.0
basis_field_path2 = os.path.join(os.path.dirname(field_data_path), field_data_path)
source_label = get_source_label('Basis', config_data)
if source_label is not None and source_label != 'Basis' and target_obj.data.shape_keys:
source_field_path = None
source_shape_name = None
if config_data and "blendShapeFields" in config_data:
for blend_field in config_data["blendShapeFields"]:
if blend_field["label"] == source_label:
source_field_path = os.path.join(os.path.dirname(field_data_path), blend_field["path"])
source_shape_name = blend_field["sourceLabel"]
break
if source_field_path is not None and source_shape_name is not None:
if source_shape_name in target_obj.data.shape_keys.key_blocks:
source_shape_key = target_obj.data.shape_keys.key_blocks.get(source_shape_name)
source_shape_key.value = 1.0
basis_field_path2 = source_field_path
print(f"source_label: {source_shape_name} is found in shape keys")
else:
temp_shape_key_name = f"{source_shape_name}_temp"
if temp_shape_key_name in target_obj.data.shape_keys.key_blocks:
target_obj.data.shape_keys.key_blocks[temp_shape_key_name].value = 1.0
basis_field_path2 = source_field_path
print(f"temp_shape_key_name: {temp_shape_key_name} is found in shape keys")
print(f"basis_field_path2: {basis_field_path2}")
# 対象のシェイプキーの値を1に設定
key_block.value = 1.0
temp_blend_shape_key_name = f"{key_block.name}_generated"
temp_shape_key = process_field_deformation(target_obj, basis_field_path2, blend_shape_labels, clothing_avatar_data, temp_blend_shape_key_name, ignore_blendshape)
additional_shape_keys.add(temp_shape_key.name)
non_relative_shape_keys.add(temp_shape_key.name)
# シェイプキーの値を元に戻す
key_block.value = 0.0
restore_shape_key_state(target_obj, original_shape_key_state)
# 遅延されたTransitionをキャッシュシステムと共に実行
non_transitioned_shape_vertices = None
created_shape_key_mask_weights = {}
shape_keys_to_remove = []
if deferred_transitions:
transition_operations, created_shape_key_mask_weights, used_shape_key_names = execute_transitions_with_cache(deferred_transitions, transition_cache, target_obj)
for transition_operation in transition_operations:
if transition_operation['transition_data']['target_label'] == 'Basis':
non_transitioned_shape_vertices = [Vector(v) for v in transition_operation['initial_vertices']]
break
if used_shape_key_names:
for config_shape_key_name in config_generated_shape_keys:
if config_shape_key_name not in used_shape_key_names and config_shape_key_name in target_obj.data.shape_keys.key_blocks:
shape_keys_to_remove.append(config_shape_key_name)
for created_shape_key_name, mask_weights in created_shape_key_mask_weights.items():
if created_shape_key_name in target_obj.data.shape_keys.key_blocks:
config_generated_shape_keys[created_shape_key_name] = mask_weights
non_relative_shape_keys.add(created_shape_key_name)
config_blend_shape_labels.add(created_shape_key_name)
label_to_target_shape_key_name[created_shape_key_name] = created_shape_key_name
print(f"Added created shape key: {created_shape_key_name}")
shape_key.value = 1.0
# base_avatar_dataのblendShapeFieldsを処理する前の準備
basis_name = 'Basis'
basis_index = target_obj.data.shape_keys.key_blocks.find(basis_name)
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
bpy.context.view_layer.objects.active = target_obj
target_obj.select_set(True)
if non_transitioned_shape_vertices:
for additionalshape_key_name in additional_shape_keys:
if additionalshape_key_name in target_obj.data.shape_keys.key_blocks:
additional_shape_key = target_obj.data.shape_keys.key_blocks.get(additionalshape_key_name)
# shape_keyとtransition前のBasisのシェイプの差をadditional_shape_keyの各頂点に追加
for i, vert in enumerate(additional_shape_key.data):
# shape_keyとBasisの差分を計算
shape_diff = shape_key.data[i].co - non_transitioned_shape_vertices[i]
# additional_shape_keyの頂点座標に差分を追加
additional_shape_key.data[i].co += shape_diff
else:
print(f"Warning: {additionalshape_key_name} is not found in shape keys")
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
print(f"Shape keys in {target_obj.name}:")
for key_block in target_obj.data.shape_keys.key_blocks:
print(f"- {key_block.name} (value: {key_block.value})")
original_shape_key_name = f"{shape_key_name}_original"
for sk in target_obj.data.shape_keys.key_blocks:
if sk.name in non_relative_shape_keys and sk.name != basis_name:
if shape_key_name in target_obj.data.shape_keys.key_blocks:
target_obj.active_shape_key_index = target_obj.data.shape_keys.key_blocks.find(sk.name)
bpy.ops.mesh.blend_from_shape(shape=shape_key_name, blend=-1, add=True)
else:
print(f"Warning: {shape_key_name} or {shape_key_name}_original is not found in shape keys")
bpy.context.object.active_shape_key_index = basis_index
bpy.ops.mesh.blend_from_shape(shape=shape_key_name, blend=1, add=True)
bpy.ops.object.mode_set(mode='OBJECT')
if original_shape_key_name in target_obj.data.shape_keys.key_blocks:
original_shape_key = target_obj.data.shape_keys.key_blocks.get(original_shape_key_name)
target_obj.shape_key_remove(original_shape_key)
print(f"Removed shape key: {original_shape_key_name} from {target_obj.name}")
# # 不要なシェイプキーを削除
if shape_key:
target_obj.shape_key_remove(shape_key)
for unused_shape_key_name in shape_keys_to_remove:
if unused_shape_key_name in target_obj.data.shape_keys.key_blocks:
unused_shape_key = target_obj.data.shape_keys.key_blocks.get(unused_shape_key_name)
if unused_shape_key:
target_obj.shape_key_remove(unused_shape_key)
print(f"Removed shape key: {unused_shape_key_name} from {target_obj.name}")
else:
print(f"Warning: {unused_shape_key_name} is not found in shape keys")
else:
print(f"Warning: {unused_shape_key_name} is not found in shape keys")
# configファイルのblendShapeFieldsで生成されたシェイプキーの変位にmask_weightsを適用
if config_generated_shape_keys:
print(f"Applying mask weights to generated shape keys: {list(config_generated_shape_keys.keys())}")
# ベースシェイプの頂点位置を取得
basis_shape_key = target_obj.data.shape_keys.key_blocks.get(basis_name)
if basis_shape_key:
basis_positions = np.array([v.co for v in basis_shape_key.data])
# 各生成されたシェイプキーに対してマスクを適用
for shape_key_name_to_mask, mask_weights in config_generated_shape_keys.items():
if shape_key_name_to_mask == basis_name:
continue
shape_key_to_mask = target_obj.data.shape_keys.key_blocks.get(shape_key_name_to_mask)
if shape_key_to_mask:
# 現在のシェイプキーの頂点位置を取得
shape_positions = np.array([v.co for v in shape_key_to_mask.data])
# 変位を計算
displacement = shape_positions - basis_positions
# マスクを適用(変位にmask_weightsを掛ける)
if mask_weights is not None:
masked_displacement = displacement * mask_weights[:, np.newaxis]
else:
masked_displacement = displacement
# マスク適用後の位置を計算
new_positions = basis_positions + masked_displacement
# シェイプキーの頂点位置を更新
for i, vertex in enumerate(shape_key_to_mask.data):
vertex.co = new_positions[i]
print(f"Applied mask weights to shape key: {shape_key_name_to_mask}")
# 4. base_avatar_dataのblendShapeFieldsを処理(configのlabelと一致するものはスキップ)
if base_avatar_data and "blendShapeFields" in base_avatar_data and not skip_blend_shape_generation:
# アーマチュアの取得
armature_obj = get_armature_from_modifier(target_obj)
if not armature_obj:
raise ValueError("Armatureモディファイアが見つかりません")
# 対象メッシュオブジェクトの元のシェイプキー設定を保存
original_shape_key_state = save_shape_key_state(target_obj)
# すべてのシェイプキーの値を0にする
if target_obj.data.shape_keys:
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 評価されたメッシュの頂点位置を取得(シェイプキーA適用後)
depsgraph = bpy.context.evaluated_depsgraph_get()
depsgraph.update()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
vertices = np.array([v.co for v in target_obj.data.vertices]) # オリジナルの頂点配列
deformed_vertices = np.array([v.co for v in eval_mesh.vertices])
# 各blendShapeFieldを処理
for blend_field in base_avatar_data["blendShapeFields"]:
label = blend_field["label"]
# configファイルのblendShapeFieldsのlabelと一致する場合はスキップ
if label in config_blend_shape_labels:
print(f"Skipping base avatar blend shape field '{label}' (already processed from config)")
continue
field_path = os.path.join(os.path.dirname(field_data_path), blend_field["filePath"])
if os.path.exists(field_path):
print(f"Applying blend shape field for {label}")
# フィールドデータの読み込み
field_info_blend = get_deformation_field_multi_step(field_path)
blend_points = field_info_blend['all_field_points']
blend_deltas = field_info_blend['all_delta_positions']
blend_field_weights = field_info_blend['field_weights']
blend_matrix = field_info_blend['world_matrix']
blend_matrix_inv = field_info_blend['world_matrix_inv']
blend_k_neighbors = field_info_blend['kdtree_query_k']
# マスクウェイトを取得
mask_weights = None
if "maskBones" in blend_field:
mask_weights = create_blendshape_mask(target_obj, blend_field["maskBones"], clothing_avatar_data, field_name=label, store_debug_mask=True)
# 変形後の位置を計算
deformed_positions = batch_process_vertices_multi_step(
deformed_vertices,
blend_points,
blend_deltas,
blend_field_weights,
blend_matrix,
blend_matrix_inv,
target_obj.matrix_world,
target_obj.matrix_world.inverted(),
mask_weights,
batch_size=1000,
k=blend_k_neighbors
)
# 変位が0かどうかをワールド座標でチェック
has_displacement = False
for i in range(len(deformed_vertices)):
displacement = deformed_positions[i] - (target_obj.matrix_world @ Vector(deformed_vertices[i]))
if np.any(np.abs(displacement) > 1e-5): # 微小な変位は無視
print(f"blendShapeFields {label} world_displacement: {displacement}")
has_displacement = True
break
# 変位が存在する場合のみシェイプキーを作成
if has_displacement:
blend_shape_key_name = label
if target_obj.data.shape_keys and label in target_obj.data.shape_keys.key_blocks:
blend_shape_key_name = f"{label}_generated"
# シェイプキーを作成
shape_key_b = target_obj.shape_key_add(name=blend_shape_key_name)
shape_key_b.value = 0.0 # 初期値は0
# シェイプキーに頂点位置を保存
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(vertices)):
matrix_armature_inv = calculate_inverse_pose_matrix(target_obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
# 変形後の位置をローカル座標に変換
deformed_world_pos = matrix_armature_inv @ Vector(deformed_positions[i])
deformed_local_pos = target_obj.matrix_world.inverted() @ deformed_world_pos
shape_key_b.data[i].co = deformed_local_pos
matrix_armature_inv_fallback = matrix_armature_inv
else:
print(f"Skipping creation of shape key '{label}' as it has no displacement")
else:
print(f"Warning: Field file not found for blend shape {label}")
# 元のシェイプキー設定を復元
restore_shape_key_state(target_obj, original_shape_key_state)
# すべてのシェイプキーの値を元に戻す
for sk in target_obj.data.shape_keys.key_blocks:
sk.value = 0.0
def apply_field_delta_with_rigid_transform_single(obj, field_data_path, blend_shape_labels=None, clothing_avatar_data=None, shape_key_name="RigidTransformed"):
used_shape_keys = []
if blend_shape_labels and clothing_avatar_data:
# 事前に作成されたシェイプキーから頂点位置を取得
for label in blend_shape_labels:
# 衣装モデルに同名のシェイプキーがある場合は適用しない
if obj.data.shape_keys and label in obj.data.shape_keys.key_blocks:
print(f"Skipping {label} - already has shape key")
continue
target_avatar_base_shape_key_name = f"{label}_BaseShape"
if obj.data.shape_keys and target_avatar_base_shape_key_name in obj.data.shape_keys.key_blocks:
target_avatar_base_shape_key = obj.data.shape_keys.key_blocks[target_avatar_base_shape_key_name]
target_avatar_base_shape_key.value = 1.0
print(f"Using shape key {target_avatar_base_shape_key_name} for BlendShape deformation")
used_shape_keys.append(target_avatar_base_shape_key_name)
else:
print(f"Warning: Shape key {target_avatar_base_shape_key_name} not found")
# 評価済みメッシュから頂点位置(元の状態)を取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
original_positions = np.array([v.co for v in eval_mesh.vertices])
current_positions = original_positions.copy()
# メインの Deformation Field を適用
field_info = get_deformation_field_multi_step(field_data_path)
field_points = field_info['all_field_points']
delta_positions = field_info['all_delta_positions']
field_weights = field_info['field_weights']
field_matrix = field_info['world_matrix']
field_matrix_inv = field_info['world_matrix_inv']
k_neighbors = field_info['kdtree_query_k']
# Deformation Field に基づく変形位置を計算
deformed_positions = batch_process_vertices_multi_step(
current_positions,
field_points,
delta_positions,
field_weights,
field_matrix,
field_matrix_inv,
obj.matrix_world,
obj.matrix_world.inverted(),
None,
batch_size=1000,
k=k_neighbors
)
# numpy配列に変換
source_points = np.array([obj.matrix_world @ Vector(v) for v in current_positions])
target_points = np.array(deformed_positions)
# # DistanceWeight頂点グループからの影響度を取得
#influence_factors = get_distance_weight_influence_factors(obj, 0.5)
#s, R, t = calculate_optimal_similarity_transform_weighted(source_points, target_points, influence_factors)
s, R, t = calculate_optimal_similarity_transform(source_points, target_points)
# 相似変換を適用した結果を計算
similarity_transformed = apply_similarity_transform_to_points(source_points, s, R, t)
for label in used_shape_keys:
obj.data.shape_keys.key_blocks[label].value = 0.0
# シェイプキーを作成
if obj.data.shape_keys is None:
obj.shape_key_add(name='Basis')
if obj.data.shape_keys and shape_key_name in obj.data.shape_keys.key_blocks:
shape_key = obj.data.shape_keys.key_blocks[shape_key_name]
else:
shape_key = obj.shape_key_add(name=shape_key_name)
shape_key.value = 1.0
# アーマチュアを取得
armature_obj = get_armature_from_modifier(obj)
if not armature_obj:
raise ValueError("Armatureモディファイアが見つかりません")
# シェイプキーに頂点位置を設定
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(current_positions)):
matrix_armature_inv = calculate_inverse_pose_matrix(obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
undeformed_world_pos = matrix_armature_inv @ Vector(similarity_transformed[i])
local_pos = obj.matrix_world.inverted() @ undeformed_world_pos
shape_key.data[i].co = local_pos
matrix_armature_inv_fallback = matrix_armature_inv
return shape_key
def apply_field_delta_with_rigid_transform(obj, field_data_path, blend_shape_labels=None, base_avatar_data=None, clothing_avatar_data=None, shape_key_name="RigidTransformed", influence_range=1.0, config_data=None, overwrite_base_shape_key=True):
"""
保存された対称Deformation Field差分データを読み込み、最適な剛体変換として適用する(多段階対応)
Parameters:
obj: 対象メッシュオブジェクト
field_data_path: Deformation Fieldのパス
blend_shape_labels: 適用するブレンドシェイプのラベルリスト(オプション)
base_avatar_data: ベースアバターデータ(オプション)
clothing_avatar_data: 衣装アバターデータ(オプション)
shape_key_name: 作成するシェイプキーの名前
influence_range: DistanceWeight頂点グループによる影響度の範囲(0.0-1.0、デフォルト0.5)
Returns:
シェイプキー
"""
# Transitionキャッシュを初期化
transition_cache = TransitionCache()
deferred_transitions = [] # 遅延実行するTransitionのリスト
original_shape_key_state = save_shape_key_state(obj)
if obj.data.shape_keys:
for sk in obj.data.shape_keys.key_blocks:
sk.value = 0.0
basis_field_path = os.path.join(os.path.dirname(field_data_path), field_data_path)
print(f"selected field_data_path: {basis_field_path}")
shape_key = apply_field_delta_with_rigid_transform_single(obj, basis_field_path, blend_shape_labels, clothing_avatar_data, shape_key_name)
# Basis遷移を遅延実行リストに追加
if config_data:
deferred_transitions.append({
'target_obj': obj,
'config_data': config_data,
'target_label': 'Basis',
'target_shape_key_name': shape_key_name,
'base_avatar_data': base_avatar_data,
'clothing_avatar_data': clothing_avatar_data,
'base_avatar_data': base_avatar_data,
'save_original_shape_key': True
})
restore_shape_key_state(obj, original_shape_key_state)
# configファイルのblendShapeFieldsを処理するためのラベルセットを作成
config_blend_shape_labels = set()
config_generated_shape_keys = {} # 後続処理の対象外にするシェイプキー名を保存
non_relative_shape_keys = set() # 相対的な変位を持たないシェイプキー名を保存
skipped_shape_keys = set()
label_to_target_shape_key_name = {'Basis': shape_key_name}
# 1. configファイルのblendShapeFieldsを先に処理
if config_data and "blendShapeFields" in config_data:
print("Processing config blendShapeFields (rigid transform)...")
for blend_field in config_data["blendShapeFields"]:
label = blend_field["label"]
source_label = blend_field["sourceLabel"]
field_path = os.path.join(os.path.dirname(field_data_path), blend_field["path"])
print(f"selected field_path: {field_path}")
source_blend_shape_settings = blend_field.get("sourceBlendShapeSettings", [])
if (blend_shape_labels is None or source_label not in blend_shape_labels) and source_label not in obj.data.shape_keys.key_blocks:
print(f"Skipping {label} - source label {source_label} not in shape keys")
skipped_shape_keys.add(label)
continue
# マスクウェイトを取得
mask_bones = blend_field.get("maskBones", [])
mask_weights = None
if mask_bones:
mask_weights = create_blendshape_mask(obj, mask_bones, clothing_avatar_data, field_name=label, store_debug_mask=True)
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping {label} - all mask weights are zero")
continue
# 対象メッシュオブジェクトの元のシェイプキー設定を保存
original_shape_key_state = save_shape_key_state(obj)
# すべてのシェイプキーの値を0にする
if obj.data.shape_keys:
for key_block in obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 最初のConfig Pairでの対象シェイプキー(1が前提)もしくは前のConfig PairでTransition後のシェイプキーの値を1にする
if clothing_avatar_data["name"] == "Template":
if obj.data.shape_keys:
if source_label in obj.data.shape_keys.key_blocks:
source_shape_key = obj.data.shape_keys.key_blocks.get(source_label)
source_shape_key.value = 1.0
print(f"source_label: {source_label} is found in shape keys")
else:
temp_shape_key_name = f"{source_label}_temp"
if temp_shape_key_name in obj.data.shape_keys.key_blocks:
obj.data.shape_keys.key_blocks[temp_shape_key_name].value = 1.0
print(f"temp_shape_key_name: {temp_shape_key_name} is found in shape keys")
else:
# source_blend_shape_settingsを適用
for source_blend_shape_setting in source_blend_shape_settings:
source_blend_shape_name = source_blend_shape_setting.get("name", "")
source_blend_shape_value = source_blend_shape_setting.get("value", 0.0)
if source_blend_shape_name in obj.data.shape_keys.key_blocks:
source_blend_shape_key = obj.data.shape_keys.key_blocks.get(source_blend_shape_name)
source_blend_shape_key.value = source_blend_shape_value
print(f"source_blend_shape_name: {source_blend_shape_name} is found in shape keys")
else:
temp_blend_shape_key_name = f"{source_blend_shape_name}_temp"
if temp_blend_shape_key_name in obj.data.shape_keys.key_blocks:
obj.data.shape_keys.key_blocks[temp_blend_shape_key_name].value = source_blend_shape_value
print(f"temp_blend_shape_key_name: {temp_blend_shape_key_name} is found in shape keys")
# blend_shape_key_nameを設定(同名のシェイプキーがある場合は_generatedを付ける)
blend_shape_key_name = label
if obj.data.shape_keys and label in obj.data.shape_keys.key_blocks:
blend_shape_key_name = f"{label}_generated"
if os.path.exists(field_path):
print(f"Processing config blend shape field with rigid transform: {label} -> {blend_shape_key_name}")
generated_shape_key = apply_field_delta_with_rigid_transform_single(obj, field_path, blend_shape_labels, clothing_avatar_data, blend_shape_key_name)
# 該当するラベルの遷移を遅延実行リストに追加
if config_data and generated_shape_key:
deferred_transitions.append({
'target_obj': obj,
'config_data': config_data,
'target_label': label,
'target_shape_key_name': generated_shape_key.name,
'base_avatar_data': base_avatar_data,
'clothing_avatar_data': clothing_avatar_data,
'base_avatar_data': base_avatar_data,
'save_original_shape_key': False
})
# 生成されたシェイプキーの値を0にする
if generated_shape_key:
generated_shape_key.value = 0.0
config_generated_shape_keys[generated_shape_key.name] = mask_weights
non_relative_shape_keys.add(generated_shape_key.name)
config_blend_shape_labels.add(label)
label_to_target_shape_key_name[label] = generated_shape_key.name
else:
print(f"Warning: Config blend shape field file not found: {field_path}")
# 元のシェイプキー設定を復元
restore_shape_key_state(obj, original_shape_key_state)
# transition_setsに含まれるがconfig_blend_shape_labelsに含まれないシェイプキーに対して処理
if config_data and config_data.get('blend_shape_transition_sets', []):
print("Processing skipped config blendShapeFields...")
transition_sets = config_data.get('blend_shape_transition_sets', [])
for transition_set in transition_sets:
label = transition_set["label"]
if label in config_blend_shape_labels or label == 'Basis':
continue
source_label = get_source_label(label, config_data)
if source_label not in label_to_target_shape_key_name:
print(f"Skipping {label} - source label {source_label} not in label_to_target_shape_key_name")
continue
print(f"Processing skipped config blendShapeField: {label}")
# マスクウェイトを取得
mask_bones = transition_set.get("mask_bones", [])
print(f"mask_bones: {mask_bones}")
mask_weights = None
if mask_bones:
mask_weights = create_blendshape_mask(obj, mask_bones, clothing_avatar_data, field_name=label, store_debug_mask=True)
if mask_weights is not None and np.all(mask_weights == 0):
print(f"Skipping {label} - all mask weights are zero")
continue
target_shape_key_name = label_to_target_shape_key_name[source_label]
target_shape_key = obj.data.shape_keys.key_blocks.get(target_shape_key_name)
if not target_shape_key:
print(f"Skipping {label} - target shape key {target_shape_key_name} not found")
continue
# target_shape_key_nameで指定されるシェイプキーのコピーを作成
blend_shape_key_name = label
if obj.data.shape_keys and label in obj.data.shape_keys.key_blocks:
blend_shape_key_name = f"{label}_generated"
skipped_blend_shape_key = obj.shape_key_add(name=blend_shape_key_name)
for i in range(len(skipped_blend_shape_key.data)):
skipped_blend_shape_key.data[i].co = target_shape_key.data[i].co.copy()
print(f"skipped_blend_shape_key: {skipped_blend_shape_key.name}")
if config_data and skipped_blend_shape_key:
deferred_transitions.append({
'target_obj': obj,
'config_data': config_data,
'target_label': label,
'target_shape_key_name': skipped_blend_shape_key.name,
'base_avatar_data': base_avatar_data,
'clothing_avatar_data': clothing_avatar_data,
'save_original_shape_key': False
})
print(f"Added deferred transition: {label} -> {skipped_blend_shape_key.name}")
config_generated_shape_keys[skipped_blend_shape_key.name] = mask_weights
non_relative_shape_keys.add(skipped_blend_shape_key.name)
config_blend_shape_labels.add(label)
label_to_target_shape_key_name[label] = skipped_blend_shape_key.name
# 2. clothing_avatar_dataのblendshapesに含まれないシェイプキーに対して処理 (現在はコピーのみ行う)
if obj.data.shape_keys:
# clothing_avatar_dataからblendshapeのリストを作成
clothing_blendshapes = set()
if clothing_avatar_data and "blendshapes" in clothing_avatar_data:
for blendshape in clothing_avatar_data["blendshapes"]:
clothing_blendshapes.add(blendshape["name"])
# 各シェイプキーについて処理
for key_block in obj.data.shape_keys.key_blocks:
if (key_block.name == "Basis" or
key_block.name in clothing_blendshapes or
key_block == shape_key or
key_block.name.endswith("_BaseShape") or
key_block.name in config_generated_shape_keys.keys() or
key_block.name in config_blend_shape_labels or
key_block.name.endswith("_original") or
key_block.name.endswith("_generated") or
key_block.name.endswith("_temp")):
continue # Basisまたはclothing_avatar_dataのblendshapesに含まれるもの、または_BaseShapeで終わるもの、またはconfigで生成されたものはスキップ
print(f"Processing additional shape key: {key_block.name}")
temp_blend_shape_key_name = f"{key_block.name}_generated"
if temp_blend_shape_key_name in obj.data.shape_keys.key_blocks:
temp_shape_key = obj.data.shape_keys.key_blocks[temp_blend_shape_key_name]
else:
temp_shape_key = obj.shape_key_add(name=temp_blend_shape_key_name)
for i, vertex in enumerate(temp_shape_key.data):
vertex.co = key_block.data[i].co.copy()
# 遅延されたTransitionをキャッシュシステムと共に実行
created_shape_key_mask_weights = {}
shape_keys_to_remove = []
if deferred_transitions:
transition_operations, created_shape_key_mask_weights, used_shape_key_names = execute_transitions_with_cache(deferred_transitions, transition_cache, obj, rigid_transformation=True)
if used_shape_key_names:
for config_shape_key_name in config_generated_shape_keys:
if config_shape_key_name not in used_shape_key_names and config_shape_key_name in obj.data.shape_keys.key_blocks:
shape_keys_to_remove.append(config_shape_key_name)
for created_shape_key_name, mask_weights in created_shape_key_mask_weights.items():
if created_shape_key_name in obj.data.shape_keys.key_blocks:
config_generated_shape_keys[created_shape_key_name] = mask_weights
non_relative_shape_keys.add(created_shape_key_name)
config_blend_shape_labels.add(created_shape_key_name)
label_to_target_shape_key_name[created_shape_key_name] = created_shape_key_name
print(f"Added created shape key: {created_shape_key_name}")
if overwrite_base_shape_key:
# base_avatar_dataのblendShapeFieldsを処理する前の準備
basis_name = 'Basis'
basis_index = obj.data.shape_keys.key_blocks.find(basis_name)
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
print(f"Shape keys in {obj.name}:")
for key_block in obj.data.shape_keys.key_blocks:
print(f"- {key_block.name} (value: {key_block.value})")
original_shape_key_name = f"{shape_key_name}_original"
for sk in obj.data.shape_keys.key_blocks:
if sk.name in non_relative_shape_keys and sk.name != basis_name:
if shape_key_name in obj.data.shape_keys.key_blocks:
obj.active_shape_key_index = obj.data.shape_keys.key_blocks.find(sk.name)
bpy.ops.mesh.blend_from_shape(shape=shape_key_name, blend=-1, add=True)
else:
print(f"Warning: {shape_key_name} or {shape_key_name}_original is not found in shape keys")
bpy.context.object.active_shape_key_index = basis_index
bpy.ops.mesh.blend_from_shape(shape=shape_key_name, blend=1, add=True)
bpy.ops.object.mode_set(mode='OBJECT')
if original_shape_key_name in obj.data.shape_keys.key_blocks:
original_shape_key = obj.data.shape_keys.key_blocks.get(original_shape_key_name)
obj.shape_key_remove(original_shape_key)
print(f"Removed shape key: {original_shape_key_name} from {obj.name}")
# 不要なシェイプキーを削除
if shape_key:
obj.shape_key_remove(shape_key)
# configファイルのblendShapeFieldsで生成されたシェイプキーの変位にmask_weightsを適用
if config_generated_shape_keys:
print(f"Applying mask weights to generated shape keys: {list(config_generated_shape_keys.keys())}")
# ベースシェイプの頂点位置を取得
basis_shape_key = obj.data.shape_keys.key_blocks.get(basis_name)
if basis_shape_key:
basis_positions = np.array([v.co for v in basis_shape_key.data])
# 各生成されたシェイプキーに対してマスクを適用
for shape_key_name_to_mask, mask_weights in config_generated_shape_keys.items():
if shape_key_name_to_mask == basis_name:
continue
shape_key_to_mask = obj.data.shape_keys.key_blocks.get(shape_key_name_to_mask)
if shape_key_to_mask:
# 現在のシェイプキーの頂点位置を取得
shape_positions = np.array([v.co for v in shape_key_to_mask.data])
# 変位を計算
displacement = shape_positions - basis_positions
# マスクを適用(変位にmask_weightsを掛ける)
if mask_weights is not None:
masked_displacement = displacement * mask_weights[:, np.newaxis]
else:
masked_displacement = displacement
# マスク適用後の位置を計算
new_positions = basis_positions + masked_displacement
# シェイプキーの頂点位置を更新
for i, vertex in enumerate(shape_key_to_mask.data):
vertex.co = new_positions[i]
print(f"Applied mask weights to shape key: {shape_key_name_to_mask}")
for unused_shape_key_name in shape_keys_to_remove:
if unused_shape_key_name in obj.data.shape_keys.key_blocks:
unused_shape_key = obj.data.shape_keys.key_blocks.get(unused_shape_key_name)
if unused_shape_key:
obj.shape_key_remove(unused_shape_key)
print(f"Removed shape key: {unused_shape_key_name} from {obj.name}")
else:
print(f"Warning: {unused_shape_key_name} is not found in shape keys")
else:
print(f"Warning: {unused_shape_key_name} is not found in shape keys")
return shape_key, config_blend_shape_labels
def process_blendshape_fields_with_rigid_transform(obj, field_data_path, base_avatar_data, clothing_avatar_data, config_blend_shape_labels, influence_range=1.0, config_data=None):
"""
base_avatar_dataのblendShapeFieldsを剛体変換を使用して処理する
Parameters:
obj: 対象メッシュオブジェクト
field_data_path: Deformation Fieldのパス
base_avatar_data: ベースアバターデータ
clothing_avatar_data: 衣装アバターデータ
influence_range: DistanceWeight頂点グループによる影響度の範囲(0.0-1.0、デフォルト0.5)
"""
# base_avatar_dataのblendShapeFieldsを処理
if base_avatar_data and "blendShapeFields" in base_avatar_data:
# アーマチュアの取得
armature_obj = get_armature_from_modifier(obj)
if not armature_obj:
raise ValueError("Armatureモディファイアが見つかりません")
# 対象メッシュオブジェクトの元のシェイプキー設定を保存
original_shape_key_state = save_shape_key_state(obj)
# すべてのシェイプキーの値を0にする
if obj.data.shape_keys:
for key_block in obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 評価されたメッシュの頂点位置を取得(シェイプキーA適用後)
depsgraph = bpy.context.evaluated_depsgraph_get()
depsgraph.update()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
vertices = np.array([v.co for v in obj.data.vertices]) # オリジナルの頂点配列
deformed_vertices = np.array([v.co for v in eval_mesh.vertices])
# 各blendShapeFieldを剛体変換を使用して処理
for blend_field in base_avatar_data["blendShapeFields"]:
label = blend_field["label"]
# configファイルのblendShapeFieldsのlabelと一致する場合はスキップ
if label in config_blend_shape_labels:
print(f"Skipping base avatar blend shape field '{label}' (already processed from config)")
continue
field_path = os.path.join(os.path.dirname(field_data_path), blend_field["filePath"])
if os.path.exists(field_path):
print(f"Applying blend shape field for {label} with rigid transform")
# フィールドデータの読み込み
field_info_blend = get_deformation_field_multi_step(field_path)
blend_points = field_info_blend['all_field_points']
blend_deltas = field_info_blend['all_delta_positions']
blend_field_weights = field_info_blend['field_weights']
blend_matrix = field_info_blend['world_matrix']
blend_matrix_inv = field_info_blend['world_matrix_inv']
blend_k_neighbors = field_info_blend['kdtree_query_k']
# マスクウェイトを取得
mask_weights = None
if "maskBones" in blend_field:
mask_weights = create_blendshape_mask(obj, blend_field["maskBones"], clothing_avatar_data, field_name=label, store_debug_mask=True)
# 変形後の位置を計算
deformed_positions = batch_process_vertices_multi_step(
deformed_vertices,
blend_points,
blend_deltas,
blend_field_weights,
blend_matrix,
blend_matrix_inv,
obj.matrix_world,
obj.matrix_world.inverted(),
mask_weights,
batch_size=1000,
k=blend_k_neighbors
)
# 変位が0かどうかをワールド座標でチェック
has_displacement = False
for i in range(len(deformed_vertices)):
displacement = deformed_positions[i] - (obj.matrix_world @ Vector(deformed_vertices[i]))
if np.any(np.abs(displacement) > 1e-5): # 微小な変位は無視
print(f"blendShapeFields {label} world_displacement: {displacement}")
has_displacement = True
break
# 変位が存在する場合のみシェイプキーを作成
if has_displacement:
# ソースと変形後の点群から相似変換を計算
source_points = np.array([obj.matrix_world @ Vector(v) for v in deformed_vertices])
target_points = np.array(deformed_positions)
# # DistanceWeight頂点グループからの影響度を取得
# influence_factors = get_distance_weight_influence_factors(obj, influence_range)
# # 最適な相似変換を計算(重み付きまたは通常)
# if influence_factors is not None:
# print(f"Using weighted similarity transform with DistanceWeight vertex group for blend shape {label}")
# s, R, t = calculate_optimal_similarity_transform_weighted(source_points, target_points, influence_factors)
# else:
# s, R, t = calculate_optimal_similarity_transform(source_points, target_points)
s, R, t = calculate_optimal_similarity_transform(source_points, target_points)
# 相似変換を適用した結果を計算
similarity_transformed = apply_similarity_transform_to_points(source_points, s, R, t)
blend_shape_key_name = label
if obj.data.shape_keys and label in obj.data.shape_keys.key_blocks:
blend_shape_key_name = f"{label}_generated"
# シェイプキーを作成
shape_key_b = obj.shape_key_add(name=blend_shape_key_name)
shape_key_b.value = 0.0 # 初期値は0
# シェイプキーに頂点位置を保存
matrix_armature_inv_fallback = Matrix.Identity(4)
for i in range(len(vertices)):
matrix_armature_inv = calculate_inverse_pose_matrix(obj, armature_obj, i)
if matrix_armature_inv is None:
matrix_armature_inv = matrix_armature_inv_fallback
# 変形後の位置をローカル座標に変換
deformed_world_pos = matrix_armature_inv @ Vector(similarity_transformed[i])
deformed_local_pos = obj.matrix_world.inverted() @ deformed_world_pos
shape_key_b.data[i].co = deformed_local_pos
matrix_armature_inv_fallback = matrix_armature_inv
else:
print(f"Skipping creation of shape key '{label}' as it has no displacement")
else:
print(f"Warning: Field file not found for blend shape {label}")
# 元のシェイプキー設定を復元
restore_shape_key_state(obj, original_shape_key_state)
def calculate_obb_from_object(obj):
"""
オブジェクトのOriented Bounding Box (OBB)を計算する
Parameters:
obj: 対象のメッシュオブジェクト
Returns:
dict: OBBの情報(中心、軸、半径)
"""
# 評価済みメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# 頂点座標をワールド空間で取得
vertices = np.array([obj.matrix_world @ v.co for v in eval_mesh.vertices])
if len(vertices) == 0:
return None
# 頂点の平均位置(中心)を計算
center = np.mean(vertices, axis=0)
# 中心を原点に移動
centered_vertices = vertices - center
# 共分散行列を計算
covariance_matrix = np.cov(centered_vertices.T)
# 固有値と固有ベクトルを計算
eigenvalues, eigenvectors = np.linalg.eigh(covariance_matrix)
# 固有ベクトルを正規化
for i in range(3):
eigenvectors[:, i] = eigenvectors[:, i] / np.linalg.norm(eigenvectors[:, i])
# 各軸に沿った投影の最大値を計算
min_proj = np.full(3, float('inf'))
max_proj = np.full(3, float('-inf'))
for vertex in centered_vertices:
for i in range(3):
proj = np.dot(vertex, eigenvectors[:, i])
min_proj[i] = min(min_proj[i], proj)
max_proj[i] = max(max_proj[i], proj)
# 半径(各軸方向の長さの半分)を計算
radii = (max_proj - min_proj) / 2
# 中心位置を調整
adjusted_center = center + np.sum([(min_proj[i] + max_proj[i]) / 2 * eigenvectors[:, i] for i in range(3)], axis=0)
return {
'center': adjusted_center,
'axes': eigenvectors,
'radii': radii
}
def check_mesh_obb_intersection(mesh_obj, obb):
"""
メッシュとOBBの交差をチェックする
Parameters:
mesh_obj: チェック対象のメッシュオブジェクト
obb: OBB情報(中心、軸、半径)
Returns:
bool: 交差する場合はTrue
"""
if obb is None:
return False
# 評価済みメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = mesh_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# メッシュの頂点をOBB空間に変換して交差チェック
for v in eval_mesh.vertices:
# 頂点のワールド座標
vertex_world = mesh_obj.matrix_world @ v.co
# OBBの中心からの相対位置
relative_pos = vertex_world - Vector(obb['center'])
# OBBの各軸に沿った投影
projections = [abs(relative_pos.dot(Vector(obb['axes'][:, i]))) for i in range(3)]
# すべての軸で投影が半径以内なら交差
if all(proj <= radius for proj, radius in zip(projections, obb['radii'])):
return True
return False
def calculate_distance_based_weights(source_obj_name, target_obj_name, vertex_group_name="DistanceWeight", min_distance=0.0, max_distance=0.03):
"""
指定されたオブジェクトの各頂点から別のオブジェクトまでの最近接面距離を計測し、
距離に基づいて頂点ウェイトを設定する関数
Args:
source_obj_name (str): ウェイトを設定するオブジェクト名
target_obj_name (str): 距離計測対象のオブジェクト名
vertex_group_name (str): 作成する頂点グループ名
min_distance (float): 最小距離(ウェイト1.0になる距離)
max_distance (float): 最大距離(ウェイト0.0になる距離)
"""
# オブジェクトを取得
source_obj = bpy.data.objects.get(source_obj_name)
target_obj = bpy.data.objects.get(target_obj_name)
if not source_obj:
print(f"エラー: オブジェクト '{source_obj_name}' が見つかりません")
return False
if not target_obj:
print(f"エラー: オブジェクト '{target_obj_name}' が見つかりません")
return False
# メッシュデータを取得
source_mesh = source_obj.data
target_mesh = target_obj.data
# 頂点グループを作成または取得
if vertex_group_name not in source_obj.vertex_groups:
vertex_group = source_obj.vertex_groups.new(name=vertex_group_name)
else:
vertex_group = source_obj.vertex_groups[vertex_group_name]
# ターゲットオブジェクトのBVHTreeを作成
print("BVHTreeを構築中...")
# ターゲットメッシュのワールド座標での頂点とポリゴンを取得
target_verts = []
target_polys = []
# 評価されたメッシュを取得(モディファイアが適用された状態)
depsgraph = bpy.context.evaluated_depsgraph_get()
target_eval = target_obj.evaluated_get(depsgraph)
target_mesh_eval = target_eval.data
# ワールド座標に変換
target_matrix = target_obj.matrix_world
for vert in target_mesh_eval.vertices:
world_co = target_matrix @ vert.co
target_verts.append(world_co)
for poly in target_mesh_eval.polygons:
target_polys.append(poly.vertices)
# BVHTreeを構築
bvh = BVHTree.FromPolygons(target_verts, target_polys)
print("距離計算とウェイト設定中...")
# ソースオブジェクトの各頂点について処理
source_matrix = source_obj.matrix_world
source_eval = source_obj.evaluated_get(depsgraph)
source_mesh_eval = source_eval.data
weights = []
for i, vert in enumerate(source_mesh_eval.vertices):
# 頂点のワールド座標を取得
world_co = source_matrix @ vert.co
# 最近接面までの距離を計算
location, normal, index, distance = bvh.find_nearest(world_co)
if location is None:
print(f"警告: 頂点 {i} の最近接面が見つかりません")
distance = max_distance
# 距離に基づいてウェイトを計算
if distance <= min_distance:
weight = 1.0
elif distance >= max_distance:
weight = 0.0
else:
# 線形補間でウェイトを計算(max_distanceに近づくほど0に近づく)
weight = 1.0 - ((distance - min_distance) / (max_distance - min_distance))
weights.append(weight)
# 頂点グループにウェイトを設定
vertex_group.add([i], weight, 'REPLACE')
print(f"完了: {len(weights)} 個の頂点にウェイトを設定しました")
print(f"最小ウェイト: {min(weights):.4f}")
print(f"最大ウェイト: {max(weights):.4f}")
print(f"平均ウェイト: {np.mean(weights):.4f}")
return True
def process_mesh_with_connected_components_inline(obj, field_data_path, blend_shape_labels, clothing_avatar_data, base_avatar_data, clothing_armature, cloth_metadata=None, subdivision=True, skip_blend_shape_generation=False, config_data=None):
"""
メッシュを連結成分ごとに処理し、適切な変形を適用した後、
元のオブジェクトのままで結果を統合する
Parameters:
obj: 処理対象のメッシュオブジェクト
field_data_path: Deformation Fieldのパス
blend_shape_labels: 適用するブレンドシェイプのラベルリスト
clothing_avatar_data: 衣装アバターデータ
base_avatar_data: ベースアバターデータ
clothing_armature: 衣装のアーマチュアオブジェクト
cloth_metadata: クロスメタデータ(オプション)
"""
# オブジェクト名を保存
original_name = obj.name
# 素体メッシュを取得
base_obj = bpy.data.objects.get("Body.BaseAvatar")
if not base_obj:
raise Exception("Base avatar mesh (Body.BaseAvatar) not found")
calculate_distance_based_weights(
source_obj_name=original_name,
target_obj_name=base_obj.name,
vertex_group_name="DistanceWeight",
min_distance=0.0,
max_distance=0.1
)
# 連結成分を分離(アーマチュア設定等も保持)
separated_objects, non_separated_objects = separate_and_combine_components(obj, clothing_armature, clustering=True)
# 分離するコンポーネントがない場合は通常の処理を行う
if not separated_objects or (cloth_metadata and obj.name in cloth_metadata):
if cloth_metadata and obj.name in cloth_metadata:
subdivision = False
apply_symmetric_field_delta(obj, field_data_path, blend_shape_labels, clothing_avatar_data, base_avatar_data, subdivision, skip_blend_shape_generation=skip_blend_shape_generation, config_data=config_data)
for sep_obj in non_separated_objects:
if sep_obj == obj:
continue # 元のオブジェクト自体はスキップ
bpy.data.objects.remove(sep_obj, do_unlink=True)
for sep_obj in separated_objects:
if sep_obj == obj:
continue # 元のオブジェクト自体はスキップ
bpy.data.objects.remove(sep_obj, do_unlink=True)
return
# 進捗を報告
print(f"Processing {original_name}: {len(separated_objects)} separated, {len(non_separated_objects)} non-separated")
bpy.context.view_layer.objects.active = obj
# 分離しないコンポーネントを記録するリスト
do_not_separate = []
# 処理対象のオブジェクトリスト
processed_objects = []
# 分離されたオブジェクトに剛体変換処理を適用
for sep_obj in separated_objects:
bpy.context.view_layer.objects.active = sep_obj
_, config_blend_shape_labels = apply_field_delta_with_rigid_transform(sep_obj, field_data_path, blend_shape_labels, base_avatar_data, clothing_avatar_data, "RigidTransformed", config_data=None)
# base_avatar_dataのblendShapeFieldsを処理するための準備
if not skip_blend_shape_generation:
process_blendshape_fields_with_rigid_transform(sep_obj, field_data_path, base_avatar_data, clothing_avatar_data, config_blend_shape_labels, influence_range=1.0, config_data=config_data)
# OBBを計算
obb = calculate_obb_from_object(sep_obj)
print(f"Component {sep_obj.name} OBB: \n {obb}")
# 素体メッシュとOBBの交差をチェック
if check_mesh_obb_intersection(base_obj, obb):
print(f"Component {sep_obj.name} intersects with base mesh, will not be separated")
do_not_separate.append(sep_obj.name)
processed_objects.append(sep_obj)
bpy.context.view_layer.objects.active = obj
# 分離されたオブジェクトを削除
for sep_obj in separated_objects:
print(f"Removing {sep_obj.name}")
bpy.data.objects.remove(sep_obj, do_unlink=True)
# 分離されていないオブジェクトを削除
for sep_obj in non_separated_objects:
print(f"Removing {sep_obj.name}")
bpy.data.objects.remove(sep_obj, do_unlink=True)
# 分離しないコンポーネントのリストを使用して再度分離を行う
separated_objects, non_separated_objects = separate_and_combine_components(obj, clothing_armature, do_not_separate, clustering=True)
# 処理対象のオブジェクトリストをリセット
processed_objects = []
# 分離されたオブジェクトに剛体変換処理を適用
for sep_obj in separated_objects:
bpy.context.view_layer.objects.active = sep_obj
_, config_blend_shape_labels = apply_field_delta_with_rigid_transform(sep_obj, field_data_path, blend_shape_labels, base_avatar_data, clothing_avatar_data, "RigidTransformed", config_data=config_data)
# base_avatar_dataのblendShapeFieldsを処理するための準備
if not skip_blend_shape_generation:
process_blendshape_fields_with_rigid_transform(sep_obj, field_data_path, base_avatar_data, clothing_avatar_data, config_blend_shape_labels, influence_range=1.0, config_data=config_data)
processed_objects.append(sep_obj)
# 分離されなかったオブジェクトに通常の変形処理を適用
for non_sep_obj in non_separated_objects:
if non_sep_obj is None:
continue
if cloth_metadata and non_sep_obj.name in cloth_metadata:
subdivision = False
bpy.context.view_layer.objects.active = non_sep_obj
apply_symmetric_field_delta(non_sep_obj, field_data_path, blend_shape_labels, clothing_avatar_data, base_avatar_data, subdivision, skip_blend_shape_generation=skip_blend_shape_generation, config_data=config_data)
processed_objects.append(non_sep_obj)
# 元のオブジェクトのシェイプキー情報を保存
original_shapekeys = {}
if obj.data.shape_keys:
for key_block in obj.data.shape_keys.key_blocks:
original_shapekeys[key_block.name] = key_block.value
# 各処理済みオブジェクトの面をマテリアル順にソート
for proc_obj in processed_objects:
if proc_obj is None:
continue
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
proc_obj.select_set(True)
bpy.context.view_layer.objects.active = proc_obj
# 編集モードに入る
bpy.ops.object.mode_set(mode='EDIT')
# 面をマテリアル順にソート
bpy.ops.mesh.sort_elements(type='MATERIAL', elements={'FACE'})
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# 元のオブジェクトの頂点を削除するために編集モードに入る
bpy.ops.object.select_all(action='DESELECT')
obj.select_set(True)
bpy.context.view_layer.objects.active = obj
bpy.ops.object.mode_set(mode='EDIT')
# すべての頂点を選択して削除
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.delete(type='VERT')
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# 各処理済みオブジェクトを順番に結合
for proc_obj in processed_objects:
if proc_obj == obj:
continue # 元のオブジェクト自体はスキップ
# 選択を設定
bpy.ops.object.select_all(action='DESELECT')
proc_obj.select_set(True)
obj.select_set(True)
bpy.context.view_layer.objects.active = obj # 元のオブジェクトをアクティブに
# 結合操作
bpy.ops.object.join()
# 元のオブジェクト名を復元(結合操作で変わる可能性があるため)
obj.name = original_name
# シェイプキーの値を復元
if obj.data.shape_keys:
for key_name, value in original_shapekeys.items():
if key_name in obj.data.shape_keys.key_blocks:
obj.data.shape_keys.key_blocks[key_name].value = value
# 元のアクティブオブジェクトを復元
bpy.context.view_layer.objects.active = obj
def get_deformation_bones(armature_obj: bpy.types.Object, avatar_data: dict) -> list:
"""
アバターデータを参照し、HumanoidボーンとAuxiliaryボーン以外のボーンを取得
Parameters:
armature_obj: アーマチュアオブジェクト
avatar_data: アバターデータ
Returns:
変形対象のボーン名のリスト
"""
# HumanoidボーンとAuxiliaryボーンのセットを作成
excluded_bones = set()
# Humanoidボーンを追加
for bone_map in avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
excluded_bones.add(bone_map["boneName"])
# 補助ボーンを追加
for aux_set in avatar_data.get("auxiliaryBones", []):
for aux_bone in aux_set.get("auxiliaryBones", []):
excluded_bones.add(aux_bone)
# 除外ボーン以外のすべてのボーンを取得
deform_bones = []
for bone in armature_obj.data.bones:
if bone.name not in excluded_bones:
deform_bones.append(bone.name)
return deform_bones
def apply_bone_field_delta(armature_obj: bpy.types.Object, field_data_path: str, avatar_data: dict) -> None:
"""
ボーンにDeformation Fieldを適用
Parameters:
armature_obj: アーマチュアオブジェクト
field_data_path: Deformation Fieldデータのパス
avatar_data: アバターデータ
"""
# データの読み込み
field_info = get_deformation_field_multi_step(field_data_path)
all_field_points = field_info['all_field_points']
all_delta_positions = field_info['all_delta_positions']
all_field_weights = field_info['field_weights']
field_matrix = field_info['world_matrix']
field_matrix_inv = field_info['world_matrix_inv']
k_neighbors = field_info['kdtree_query_k']
# 変形対象のボーンを取得
deform_bones = get_deformation_bones(armature_obj, avatar_data)
bpy.ops.object.mode_set(mode='OBJECT')
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
# アクティブオブジェクトを設定
armature_obj.select_set(True)
bpy.context.view_layer.objects.active = armature_obj
# ------------------------------------------------------------------
# 【追加処理:処理前の親子Head位置の記録】
# deform_bones内で「子が1つのみ」のボーンについて、親ボーンとその子ボーンの
# ワールド空間でのHead位置を記録しておく。
# ------------------------------------------------------------------
original_heads = {}
for bone in armature_obj.pose.bones:
if bone.name in deform_bones and len(bone.children) == 1:
child = bone.children[0]
parent_head_world = armature_obj.matrix_world @ (bone.matrix @ Vector((0, 0, 0)))
child_head_world = armature_obj.matrix_world @ (child.matrix @ Vector((0, 0, 0)))
# コピーして記録(後で参照するため)
original_heads[bone.name] = (parent_head_world.copy(), child_head_world.copy())
def process_bone_hierarchy(bone_name, parent_world_displacement, kdtree, delta_positions):
"""ボーン階層を再帰的に処理"""
bone = armature_obj.pose.bones[bone_name]
ret_displacement = parent_world_displacement
if bone_name in deform_bones:
base_matrix = armature_obj.data.bones[bone.name].matrix_local
current_world_matrix = armature_obj.matrix_world @ (bone.matrix @ base_matrix.inverted())
# ヘッドの位置を取得
head_world = (armature_obj.matrix_world @ bone.matrix @ Vector((0, 0, 0))) - parent_world_displacement
# ヘッドのフィールド空間での座標を計算
head_field = field_matrix_inv @ head_world
# ヘッドの最近接点の検索
head_distances, head_indices = kdtree.query(head_field, k=k_neighbors)
# ヘッドの変位を計算
weights = 1.0 / (head_distances + 0.0001)
weights /= weights.sum()
deltas = delta_positions[head_indices]
head_displacement = (deltas * weights[:, np.newaxis]).sum(axis=0)
# ワールド空間での変位を計算
world_displacement = (field_matrix.to_3x3() @ Vector(head_displacement)) - parent_world_displacement
new_matrix = Matrix.Translation(world_displacement)
combined_matrix = new_matrix @ current_world_matrix
bone.matrix = armature_obj.matrix_world.inverted() @ combined_matrix @ base_matrix
ret_displacement = world_displacement + parent_world_displacement
# 子ボーンを処理
for child in bone.children:
process_bone_hierarchy(child.name, ret_displacement, kdtree, delta_positions)
# 各ステップの変位を累積的に適用
num_steps = len(all_field_points)
for step in range(num_steps):
field_points = all_field_points[step]
delta_positions = all_delta_positions[step]
# KDTreeを使用して近傍点を検索(各ステップで新しいKDTreeを構築)
kdtree = cKDTree(field_points)
# ルートボーンから処理を開始
root_displacement = Vector((0, 0, 0))
root_bones = [bone.name for bone in armature_obj.pose.bones if not bone.parent]
for root_bone in root_bones:
process_bone_hierarchy(root_bone, root_displacement, kdtree, delta_positions)
bpy.context.view_layer.update()
# ------------------------------------------------------------------
# 【追加処理:回転補正の適用】
# 対象のdeform_bone(子が1つのみ)について、処理前と処理後の
# 親子のHead間の方向ベクトルの変化から回転差分を求め、その回転を
# 親ボーンに適用するとともに、子ボーンにはその影響を打ち消す補正をかける。
# ------------------------------------------------------------------
# for parent_name, (old_parent_head, old_child_head) in original_heads.items():
# parent_bone = armature_obj.pose.bones.get(parent_name)
# if not parent_bone or len(parent_bone.children) != 1:
# continue
# child_bone = parent_bone.children[0]
# # 【処理後】の親・子のHead位置を計算(ワールド座標)
# new_parent_head = armature_obj.matrix_world @ (parent_bone.matrix @ Vector((0, 0, 0)))
# new_child_head = armature_obj.matrix_world @ (child_bone.matrix @ Vector((0, 0, 0)))
# # 処理前と処理後の方向ベクトルを計算(子Head - 親Head)
# old_dir = old_child_head - old_parent_head
# new_dir = new_child_head - new_parent_head
# # もしどちらかのベクトルがゼロ長の場合はスキップ
# if old_dir.length == 0.001 or new_dir.length == 0.001:
# continue
# old_dir.normalize()
# new_dir.normalize()
# # 「old_dir」から「new_dir」へ回転させる回転差分を求める
# rot_diff = old_dir.rotation_difference(new_dir)
# # 親ボーンに対して、親のHeadを中心にrot_diffを適用する
# parent_world_matrix = armature_obj.matrix_world @ parent_bone.matrix
# T = Matrix.Translation(new_parent_head)
# T_inv = Matrix.Translation(-new_parent_head)
# rot_matrix = rot_diff.to_matrix().to_4x4()
# R = T @ rot_matrix @ T_inv
# new_parent_world_matrix = R @ parent_world_matrix
# parent_bone.matrix = armature_obj.matrix_world.inverted() @ new_parent_world_matrix
# # 子ボーンには、親の回転変化の影響が及ばないよう、逆の補正を適用する
# child_world_matrix = armature_obj.matrix_world @ child_bone.matrix
# compensation = T @ rot_matrix.inverted() @ T_inv
# new_child_world_matrix = compensation @ child_world_matrix
# child_bone.matrix = armature_obj.matrix_world.inverted() @ new_child_world_matrix
# bpy.context.view_layer.update()
bpy.context.view_layer.update()
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
def find_connected_clusters(bm, vertex_indices):
"""
エッジでつながった頂点のクラスターを見つける
Args:
bm: bmeshオブジェクト
vertex_indices: 分析対象の頂点インデックスのセット
Returns:
list: 各クラスターの頂点インデックスリストのリスト
"""
# 隣接リストを作成
adjacency = defaultdict(set)
for edge in bm.edges:
v1, v2 = edge.verts[0].index, edge.verts[1].index
if v1 in vertex_indices and v2 in vertex_indices:
adjacency[v1].add(v2)
adjacency[v2].add(v1)
visited = set()
clusters = []
# 各未訪問頂点からBFSでクラスターを探索
for vertex_idx in vertex_indices:
if vertex_idx not in visited:
cluster = []
queue = deque([vertex_idx])
visited.add(vertex_idx)
while queue:
current = queue.popleft()
cluster.append(current)
# 隣接する未訪問頂点をキューに追加
for neighbor in adjacency[current]:
if neighbor not in visited:
visited.add(neighbor)
queue.append(neighbor)
clusters.append(cluster)
return clusters
def filter_clusters_by_x_coordinate(bm, clusters):
"""
X座標が正負両方または0を含むクラスターのみをフィルタリング
Args:
bm: bmeshオブジェクト
clusters: クラスターのリスト
Returns:
list: フィルタリングされたクラスターのリスト
"""
filtered_clusters = []
for cluster in clusters:
has_positive_x = False
has_negative_x = False
has_zero_x = False
# クラスター内の頂点のX座標をチェック
for vertex_idx in cluster:
x_coord = bm.verts[vertex_idx].co.x
if x_coord > 0.001: # 正の値(小さな誤差を考慮)
has_positive_x = True
elif x_coord < -0.001: # 負の値(小さな誤差を考慮)
has_negative_x = True
else: # ゼロ付近
has_zero_x = True
# X座標が正負両方または0を含む場合のみ保持
if (has_positive_x and has_negative_x) or has_zero_x:
filtered_clusters.append(cluster)
print(f"クラスター保持: {len(cluster)}頂点 (正:{has_positive_x}, 負:{has_negative_x}, ゼロ:{has_zero_x})")
else:
print(f"クラスター除去: {len(cluster)}頂点 (正:{has_positive_x}, 負:{has_negative_x}, ゼロ:{has_zero_x})")
return filtered_clusters
def select_vertices_by_conditions(target_object, vertex_group_name, avatar_data, radius=0.075, max_angle_degrees=45.0):
"""
指定されたメッシュオブジェクトにおいて、特定の条件を満たす頂点に対して新しい頂点グループを作成してウェイトを割り当てる
条件:
1. 指定半径内にLeftUpperLegもしくはその補助ボーンと、RightUpperLegもしくはその補助ボーン、両方の頂点ウェイトを持つ頂点がある
2. その頂点の法線と-Z方向ベクトルのなす角が指定角度以下
Args:
target_object: 処理対象のメッシュオブジェクト
vertex_group_name (str): 生成する頂点グループ名
avatar_data (dict): アバターデータ(humanoidBonesとauxiliaryBonesを含む)
radius (float): 検索半径
max_angle_degrees (float): 最大角度(度)
"""
# オブジェクトの検証
if not target_object or target_object.type != 'MESH':
print("エラー: 指定されたオブジェクトがメッシュではありません")
return
# 現在のアクティブオブジェクトと選択状態を保存
original_active = bpy.context.active_object
original_selected = bpy.context.selected_objects
original_mode = bpy.context.mode
# 対象オブジェクトをアクティブにしてEdit モードに切り替え
bpy.ops.object.select_all(action='DESELECT')
target_object.select_set(True)
bpy.context.view_layer.objects.active = target_object
# 評価後のメッシュデータを取得するためにObjectモードを維持
if bpy.context.mode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
# 評価後のメッシュデータを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_object = target_object.evaluated_get(depsgraph)
mesh_data = evaluated_object.data
# ワールドマトリックスを取得
world_matrix = evaluated_object.matrix_world
# メッシュデータの更新を確保
mesh_data.calc_loop_triangles()
mesh_data.update()
# アバターデータからボーン名を取得
left_upper_leg_bone_name = None
right_upper_leg_bone_name = None
hips_bone_name = None
# 補助ボーンの名前を取得
left_upper_leg_auxiliary_bones = []
right_upper_leg_auxiliary_bones = []
hips_auxiliary_bones = []
for bone_info in avatar_data.get('humanoidBones', []):
if bone_info['humanoidBoneName'] == 'LeftUpperLeg':
left_upper_leg_bone_name = bone_info['boneName']
elif bone_info['humanoidBoneName'] == 'RightUpperLeg':
right_upper_leg_bone_name = bone_info['boneName']
elif bone_info['humanoidBoneName'] == 'Hips':
hips_bone_name = bone_info['boneName']
# 補助ボーンを取得
for aux_set in avatar_data.get('auxiliaryBones', []):
if aux_set['humanoidBoneName'] == 'LeftUpperLeg':
left_upper_leg_auxiliary_bones = aux_set['auxiliaryBones']
elif aux_set['humanoidBoneName'] == 'RightUpperLeg':
right_upper_leg_auxiliary_bones = aux_set['auxiliaryBones']
elif aux_set['humanoidBoneName'] == 'Hips':
hips_auxiliary_bones = aux_set['auxiliaryBones']
# 頂点グループの取得
upper_leg_l_groups = []
upper_leg_r_groups = []
hips_groups = []
# メイン骨の頂点グループを取得
for vg in target_object.vertex_groups:
if left_upper_leg_bone_name and vg.name == left_upper_leg_bone_name:
upper_leg_l_groups.append(vg)
elif right_upper_leg_bone_name and vg.name == right_upper_leg_bone_name:
upper_leg_r_groups.append(vg)
elif hips_bone_name and vg.name == hips_bone_name:
hips_groups.append(vg)
# 補助ボーンの頂点グループを取得
for vg in target_object.vertex_groups:
if vg.name in left_upper_leg_auxiliary_bones:
upper_leg_l_groups.append(vg)
elif vg.name in right_upper_leg_auxiliary_bones:
upper_leg_r_groups.append(vg)
elif vg.name in hips_auxiliary_bones:
hips_groups.append(vg)
if not upper_leg_l_groups or not upper_leg_r_groups:
available_l_bones = [left_upper_leg_bone_name] + left_upper_leg_auxiliary_bones
available_r_bones = [right_upper_leg_bone_name] + right_upper_leg_auxiliary_bones
print(f"エラー: LeftUpperLeg({available_l_bones})またはRightUpperLeg({available_r_bones})の頂点グループが見つかりません")
# 元の状態に復元
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
for obj in original_selected:
obj.select_set(True)
if original_active:
bpy.context.view_layer.objects.active = original_active
return
# 対象となる頂点グループのインデックスセット
target_group_indices = set()
for group in upper_leg_l_groups + upper_leg_r_groups + hips_groups:
target_group_indices.add(group.index)
# 対象頂点グループに属する頂点のインデックスを収集(評価後のメッシュデータを使用)
target_vertex_indices = set()
for vertex_idx, vertex in enumerate(mesh_data.vertices):
for group_elem in vertex.groups:
if group_elem.group in target_group_indices and group_elem.weight > 0.001:
target_vertex_indices.add(vertex_idx)
break
# -Z方向ベクトル
neg_z_vector = Vector((0, 0, -1))
max_angle_rad = math.radians(max_angle_degrees)
# 新しい頂点グループの作成(既存の場合は削除してから作成)
if vertex_group_name in target_object.vertex_groups:
target_object.vertex_groups.remove(target_object.vertex_groups[vertex_group_name])
new_vertex_group = target_object.vertex_groups.new(name=vertex_group_name)
selected_count = 0
condition_met_vertices = []
start_time = time.time()
# 全頂点の座標をワールド座標に変換してリストに変換(評価後のメッシュデータを使用)
vertex_coords = [(world_matrix @ vert.co)[:] for vert in mesh_data.vertices]
print(f"頂点数: {len(vertex_coords)}")
# KDTreeを構築(ワールド座標で)
kdtree = cKDTree(vertex_coords)
# 対象頂点グループに属する頂点についてのみ条件をチェック(評価後のメッシュデータを使用)
for vert_idx, vert in enumerate(mesh_data.vertices):
if vert_idx not in target_vertex_indices:
continue
# この頂点が条件を満たすかチェック
should_select = False
# 現在の頂点をワールド座標に変換
world_vert_co = world_matrix @ vert.co
if world_vert_co.y < 0.0:
should_select = True
else:
# KDTreeで指定半径内の頂点インデックスを取得(ワールド座標で)
neighbor_indices = kdtree.query_ball_point(world_vert_co[:], radius)
# 自分自身を除外
neighbor_indices = [idx for idx in neighbor_indices if idx != vert_idx]
# 近傍頂点をチェック
for neighbor_idx in neighbor_indices:
has_upper_leg_l_or_aux = False
has_upper_leg_r_or_aux = False
# ウェイトチェック(評価後のメッシュデータを使用)
for group_elem in mesh_data.vertices[neighbor_idx].groups:
# LeftUpperLegまたはその補助ボーンのウェイトをチェック
for left_group in upper_leg_l_groups:
if group_elem.group == left_group.index and group_elem.weight > 0.05:
has_upper_leg_l_or_aux = True
break
# RightUpperLegまたはその補助ボーンのウェイトをチェック
for right_group in upper_leg_r_groups:
if group_elem.group == right_group.index and group_elem.weight > 0.05:
has_upper_leg_r_or_aux = True
break
# 両方見つかったら早期終了
if has_upper_leg_l_or_aux and has_upper_leg_r_or_aux:
break
# 両方のウェイト(メインまたは補助)を持つ場合
if has_upper_leg_l_or_aux and has_upper_leg_r_or_aux:
should_select = True
break
# 条件を満たす場合の法線チェックは後で実行
if should_select:
# 頂点法線を取得(評価後のメッシュデータを使用)
local_normal = vert.normal
# 法線をワールド空間に変換(回転と拡縮のみを適用、平行移動は無視)
world_normal = (world_matrix.to_3x3().inverted().transposed() @ local_normal).normalized()
# 法線と-Z方向ベクトルのなす角を計算(ワールド座標系で)
dot_product = world_normal.dot(neg_z_vector)
# ドット積を[-1, 1]範囲にクランプ
dot_product = max(-1.0, min(1.0, dot_product))
angle = math.acos(abs(dot_product))
# 角度が指定値以下の場合のみ追加
if angle <= max_angle_rad:
condition_met_vertices.append(vert_idx)
selected_count += 1
end_time = time.time()
print(f"KDTree検索完了: {end_time - start_time:.3f}秒")
# クラスター分析とフィルタリング
if condition_met_vertices:
print(f"クラスター分析開始: {len(condition_met_vertices)}個の頂点")
# 評価後メッシュからbmeshを作成(クラスター分析用)
temp_bm = bmesh.new()
temp_bm.from_mesh(mesh_data)
temp_bm.verts.ensure_lookup_table()
temp_bm.edges.ensure_lookup_table()
temp_bm.faces.ensure_lookup_table()
# クラスターに分割
clusters = find_connected_clusters(temp_bm, set(condition_met_vertices))
print(f"発見されたクラスター数: {len(clusters)}")
# X座標によるフィルタリング
filtered_clusters = filter_clusters_by_x_coordinate(temp_bm, clusters)
print(f"フィルタリング後のクラスター数: {len(filtered_clusters)}")
# 一時的なbmeshを解放
temp_bm.free()
# フィルタリングされたクラスターから頂点リストを再構築
final_vertices = []
for cluster in filtered_clusters:
final_vertices.extend(cluster)
condition_met_vertices = final_vertices
selected_count = len(condition_met_vertices)
print(f"最終的な頂点数: {selected_count}")
end_time = time.time()
print(f"KDTree検索完了: {end_time - start_time:.3f}秒")
# 条件を満たす頂点に重み1を割り当て、それ以外に重み0を割り当て
for vertex_idx in range(len(target_object.data.vertices)):
if vertex_idx in condition_met_vertices:
new_vertex_group.add([vertex_idx], 1.0, 'REPLACE')
else:
new_vertex_group.add([vertex_idx], 0.0, 'REPLACE')
# 元の状態に復元
bpy.ops.object.select_all(action='DESELECT')
for obj in original_selected:
obj.select_set(True)
if original_active:
bpy.context.view_layer.objects.active = original_active
if original_mode.startswith('EDIT'):
bpy.ops.object.mode_set(mode='EDIT')
print(f"作成された頂点グループ: {vertex_group_name}")
print(f"条件を満たした頂点数: {selected_count}")
print(f"対象頂点数: {len(target_vertex_indices)}")
print(f"検索半径: {radius}")
print(f"最大角度: {max_angle_degrees}度")
print(f"総頂点数: {len(target_object.data.vertices)}")
def transfer_weights_from_nearest_vertex(base_mesh, target_obj, vertex_group_name, angle_min=-1.0, angle_max=-1.0, normal_radius=0.0):
"""
base_meshの指定された頂点グループのウェイトをtarget_objに転写する
target_objの各頂点において、最も近いbase_meshの頂点を取得し、そのウェイト値を設定する
法線のなす角に基づいてウェイトを調整する
Args:
base_mesh: ベースメッシュオブジェクト(ウェイトのソース)
target_obj: ターゲットメッシュオブジェクト(ウェイトの転写先)
vertex_group_name (str): 転写する頂点グループ名
angle_min (float): 角度の最小値、この値以下では ウェイト係数0.0(度単位)
angle_max (float): 角度の最大値、この値以上では ウェイト係数1.0(度単位)
normal_radius (float): 法線の加重平均を計算する際に考慮する球体の半径
"""
# オブジェクトの検証
if not base_mesh or base_mesh.type != 'MESH':
print("エラー: ベースメッシュが指定されていないか、メッシュではありません")
return
if not target_obj or target_obj.type != 'MESH':
print("エラー: ターゲットメッシュが指定されていないか、メッシュではありません")
return
# ベースメッシュの頂点グループを取得
base_vertex_group = None
for vg in base_mesh.vertex_groups:
if vg.name == vertex_group_name:
base_vertex_group = vg
break
if not base_vertex_group:
print(f"エラー: ベースメッシュに頂点グループ '{vertex_group_name}' が見つかりません")
return
print(f"ベースメッシュ '{base_mesh.name}' からターゲットメッシュ '{target_obj.name}' へ頂点グループ '{vertex_group_name}' のウェイトを転写中...")
# モードを確認してオブジェクトモードに切り替え
original_mode = bpy.context.mode
if original_mode != 'OBJECT':
bpy.ops.object.mode_set(mode='OBJECT')
angle_min_rad = math.radians(angle_min)
angle_max_rad = math.radians(angle_max)
# BVHツリーを作成(高速な最近傍点検索のため)
# モディファイア適用後のターゲットメッシュを取得
body_bm = get_evaluated_mesh(base_mesh)
body_bm.faces.ensure_lookup_table()
# ターゲットメッシュのBVHツリーを作成
bvh_time_start = time.time()
bvh_tree = BVHTree.FromBMesh(body_bm)
bvh_time = time.time() - bvh_time_start
print(f" BVHツリー作成: {bvh_time:.2f}秒")
# 頂点グループがまだ存在しない場合は作成
if vertex_group_name not in target_obj.vertex_groups:
target_obj.vertex_groups.new(name=vertex_group_name)
target_vertex_group = target_obj.vertex_groups[vertex_group_name]
# モディファイア適用後のソースメッシュを取得
cloth_bm = get_evaluated_mesh(target_obj)
cloth_bm.verts.ensure_lookup_table()
cloth_bm.faces.ensure_lookup_table()
# トランスフォームマトリックスをキャッシュ(繰り返しの計算を避けるため)
body_normal_matrix = base_mesh.matrix_world.inverted().transposed()
cloth_normal_matrix = target_obj.matrix_world.inverted().transposed()
# 修正した法線を格納する辞書
adjusted_normals_time_start = time.time()
adjusted_normals = {}
# 衣装メッシュの各頂点の法線処理(逆転の必要があるかチェック)
for i, vertex in enumerate(cloth_bm.verts):
# ワールド座標系での頂点位置と法線
cloth_vert_world = vertex.co
original_normal_world = (cloth_normal_matrix @ Vector((vertex.normal[0], vertex.normal[1], vertex.normal[2], 0))).xyz.normalized()
# 素体メッシュ上の最近傍面を検索
nearest_result = bvh_tree.find_nearest(cloth_vert_world)
if nearest_result:
# BVHTree.find_nearest() は (co, normal, index, distance) を返す
nearest_point, nearest_normal, nearest_face_index, _ = nearest_result
# 最近傍面を取得
face = body_bm.faces[nearest_face_index]
face_normal = face.normal
# 面の法線をワールド座標系に変換
face_normal_world = (body_normal_matrix @ Vector((face_normal[0], face_normal[1], face_normal[2], 0))).xyz.normalized()
# 内積が負の場合、法線を反転
dot_product = original_normal_world.dot(face_normal_world)
if dot_product < 0:
adjusted_normal = -original_normal_world
else:
adjusted_normal = original_normal_world
# 調整済み法線を辞書に保存
adjusted_normals[i] = adjusted_normal
else:
# 最近傍点が見つからない場合は元の法線を使用
adjusted_normals[i] = original_normal_world
adjusted_normals_time = time.time() - adjusted_normals_time_start
print(f" 法線調整: {adjusted_normals_time:.2f}秒")
# 面の中心点と面積を事前計算してキャッシュ
face_cache_time_start = time.time()
face_centers = []
face_areas = {}
face_adjusted_normals = {}
face_indices = []
for face in cloth_bm.faces:
# 面の中心点を計算
center = Vector((0, 0, 0))
for v in face.verts:
center += v.co
center /= len(face.verts)
face_centers.append(center)
face_indices.append(face.index)
# 面積を計算
face_areas[face.index] = face.calc_area()
# 面の調整済み法線を計算
face_normal = Vector((0, 0, 0))
for v in face.verts:
face_normal += adjusted_normals[v.index]
face_adjusted_normals[face.index] = face_normal.normalized()
face_cache_time = time.time() - face_cache_time_start
print(f" 面キャッシュ作成: {face_cache_time:.2f}秒")
# 衣装メッシュの面に対してKDTreeを構築
kdtree_time_start = time.time()
# kd.balance()
kd = cKDTree(face_centers)
kdtree_time = time.time() - kdtree_time_start
print(f" KDTree構築: {kdtree_time:.2f}秒")
# 各頂点の法線を近傍面の法線の加重平均で更新
normal_avg_time_start = time.time()
for i, vertex in enumerate(cloth_bm.verts):
# 一定の半径内の面を検索
co = vertex.co
weighted_normal = Vector((0, 0, 0))
total_weight = 0
# KDTreeを使用して近傍の面を効率的に検索
for index in kd.query_ball_point(co, normal_radius):
# 距離に応じた重みを計算(距離が近いほど影響が大きい)
face_index = face_indices[index]
area = face_areas[face_index]
dist = (co - face_centers[index]).length
# 距離に基づく減衰係数
distance_factor = 1.0 - (dist / normal_radius) if dist < normal_radius else 0.0
weight = area * distance_factor
weighted_normal += face_adjusted_normals[face_index] * weight
total_weight += weight
# 重みの合計が0でない場合は正規化
if total_weight > 0:
weighted_normal /= total_weight
weighted_normal.normalize()
# 調整済み法線を更新
adjusted_normals[i] = weighted_normal
normal_avg_time = time.time() - normal_avg_time_start
print(f" 法線加重平均計算: {normal_avg_time:.2f}秒")
# 衣装メッシュの各頂点に対して処理
weight_calc_time_start = time.time()
for i, vertex in enumerate(cloth_bm.verts):
# ワールド座標系での頂点位置
cloth_vert_world = vertex.co
# 調整済みの法線を使用
cloth_normal_world = adjusted_normals[i]
# 素体メッシュ上の最近傍面を検索
nearest_result = bvh_tree.find_nearest(cloth_vert_world)
distance = float('inf') # 初期値として無限大を設定
# 頂点ウェイトの初期値
weight = 0.0
if nearest_result:
# BVHTree.find_nearest() は (co, normal, index, distance) を返す
nearest_point, nearest_normal, nearest_face_index, _ = nearest_result
# 最近傍面を取得
face = body_bm.faces[nearest_face_index]
face_normal = face.normal
# 面上の最近接点を計算
closest_point_on_face = mathutils.geometry.closest_point_on_tri(
cloth_vert_world,
face.verts[0].co,
face.verts[1].co,
face.verts[2].co
)
# base_mesh面上の最近接点の{vertex_group_name}頂点グループのウェイトを線形補完によって計算する
# 面の3つの頂点を取得
v0, v1, v2 = face.verts[0], face.verts[1], face.verts[2]
# 各頂点のウェイトを取得
vg_index = base_vertex_group.index
w0 = 0.0
w1 = 0.0
w2 = 0.0
# base_meshの元のメッシュデータから頂点ウェイトを取得
base_mesh_data = base_mesh.data
try:
for group in base_mesh_data.vertices[v0.index].groups:
if group.group == vg_index:
w0 = group.weight
break
except (IndexError, KeyError):
pass
try:
for group in base_mesh_data.vertices[v1.index].groups:
if group.group == vg_index:
w1 = group.weight
break
except (IndexError, KeyError):
pass
try:
for group in base_mesh_data.vertices[v2.index].groups:
if group.group == vg_index:
w2 = group.weight
break
except (IndexError, KeyError):
pass
# 重心座標を計算
# 三角形の3つの頂点と面上の点から重心座標を求める
p0 = v0.co
p1 = v1.co
p2 = v2.co
p = closest_point_on_face
v0v1 = p1 - p0
v0v2 = p2 - p0
v0p = p - p0
d00 = v0v1.dot(v0v1)
d01 = v0v1.dot(v0v2)
d11 = v0v2.dot(v0v2)
d20 = v0p.dot(v0v1)
d21 = v0p.dot(v0v2)
denom = d00 * d11 - d01 * d01
if abs(denom) > 1e-8:
# 重心座標 (u, v, w) を計算
v = (d11 * d20 - d01 * d21) / denom
w = (d00 * d21 - d01 * d20) / denom
u = 1.0 - v - w
# 重心座標を使ってウェイトを線形補間
weight = u * w0 + v * w1 + w * w2
# ウェイトを0~1の範囲にクランプ
weight = max(0.0, min(1.0, weight))
else:
# 退化した三角形の場合は最も近い頂点のウェイトを使用
dist0 = (p - p0).length
dist1 = (p - p1).length
dist2 = (p - p2).length
if dist0 <= dist1 and dist0 <= dist2:
weight = w0
elif dist1 <= dist2:
weight = w1
else:
weight = w2
# 面の法線をワールド座標系に変換
face_normal_world = (body_normal_matrix @ Vector((face_normal[0], face_normal[1], face_normal[2], 0))).xyz.normalized()
# 距離を計算
distance = (cloth_vert_world - closest_point_on_face).length
# 最近傍点と法線を設定
nearest_point = closest_point_on_face
nearest_normal = face_normal_world
else:
# 最近傍点が見つからない場合は初期値をNoneに設定
nearest_point = None
nearest_normal = None
if nearest_point:
# 法線角度に基づくウェイト(線形補間)
angle_weight = 0.0
if nearest_normal:
# 法線の角度を計算
angle = math.acos(min(1.0, max(-1.0, cloth_normal_world.dot(nearest_normal))))
# 90度以上の場合は法線を反転して再計算
if angle > math.pi / 2:
inverted_normal = -nearest_normal
angle = math.acos(min(1.0, max(-1.0, cloth_normal_world.dot(inverted_normal))))
# 角度の線形補間
if angle <= angle_min_rad:
angle_weight = 0.0
elif angle >= angle_max_rad:
angle_weight = 1.0
else:
# 線形補間
angle_weight = (angle - angle_min_rad) / (angle_max_rad - angle_min_rad)
weight = weight * angle_weight
# 頂点グループにウェイトを設定
target_vertex_group.add([i], weight, 'REPLACE')
weight_calc_time = time.time() - weight_calc_time_start
print(f" ウェイト計算: {weight_calc_time:.2f}秒")
# 元のモードに戻す
if original_mode != 'OBJECT':
if original_mode.startswith('EDIT'):
bpy.ops.object.mode_set(mode='EDIT')
def barycentric_coords_from_point(p, a, b, c):
"""
三角形上の点pの重心座標を計算する
Args:
p: 点の座標(Vector)
a, b, c: 三角形の頂点座標(Vector)
Returns:
(u, v, w): 重心座標のタプル(u + v + w = 1)
"""
v0 = b - a
v1 = c - a
v2 = p - a
d00 = v0.dot(v0)
d01 = v0.dot(v1)
d11 = v1.dot(v1)
d20 = v2.dot(v0)
d21 = v2.dot(v1)
denom = d00 * d11 - d01 * d01
if abs(denom) < 1e-10:
# 退化した三角形の場合は最も近い頂点のウェイトを1にする
dist_a = (p - a).length
dist_b = (p - b).length
dist_c = (p - c).length
min_dist = min(dist_a, dist_b, dist_c)
if min_dist == dist_a:
return (1.0, 0.0, 0.0)
elif min_dist == dist_b:
return (0.0, 1.0, 0.0)
else:
return (0.0, 0.0, 1.0)
v = (d11 * d20 - d01 * d21) / denom
w = (d00 * d21 - d01 * d20) / denom
u = 1.0 - v - w
return (u, v, w)
def find_vertices_near_faces(base_mesh, target_mesh, vertex_group_name, max_distance=1.0, max_angle_degrees=None, use_all_faces=False, smooth_repeat=3):
"""
ベースメッシュの特定の頂点グループに属する面から指定距離内にあるターゲットメッシュの頂点を見つける、法線の方向を考慮する
Args:
base_mesh: ベースメッシュオブジェクト(面を構成する頂点が属する頂点グループを持つ)
target_mesh: ターゲットメッシュオブジェクト(検索対象の頂点を持つ)
vertex_group_name (str): 検索対象の頂点グループ名(両メッシュで共通)
max_distance (float): 最大距離
max_angle_degrees (float): 最大角度 (度)、Noneの場合は法線の方向を考慮しない
use_all_faces (bool): すべての面を使用するかどうか
smooth_repeat (int): スムージングの繰り返し回数
"""
# オブジェクトの検証
if not base_mesh or base_mesh.type != 'MESH':
print("エラー: ベースメッシュが指定されていないか、メッシュではありません")
return
if not target_mesh or target_mesh.type != 'MESH':
print("エラー: ターゲットメッシュが指定されていないか、メッシュではありません")
return
# ベースメッシュの頂点グループを取得
base_vertex_group = None
for vg in base_mesh.vertex_groups:
if vg.name == vertex_group_name:
base_vertex_group = vg
break
if not base_vertex_group:
print(f"エラー: ベースメッシュに頂点グループ '{vertex_group_name}' が見つかりません")
return
# 現在のアクティブオブジェクトと選択状態を保存
original_active = bpy.context.active_object
original_selected = bpy.context.selected_objects
original_mode = bpy.context.mode
print(f"ベースメッシュ '{base_mesh.name}' の頂点グループ '{vertex_group_name}' に属する面を分析中...")
# ベースメッシュから対象となる面を抽出
bpy.ops.object.select_all(action='DESELECT')
base_mesh.select_set(True)
bpy.context.view_layer.objects.active = base_mesh
bpy.ops.object.mode_set(mode='OBJECT')
# ベースメッシュを複製して三角面化
print("ベースメッシュを複製して三角面化中...")
bpy.ops.object.duplicate()
temp_base_mesh = bpy.context.active_object
temp_base_mesh.name = f"{base_mesh.name}_temp_triangulated"
# 複製したメッシュを三角面化
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.quads_convert_to_tris(quad_method='BEAUTY', ngon_method='BEAUTY')
bpy.ops.object.mode_set(mode='OBJECT')
# 評価後のメッシュデータを取得(三角面化されたベースメッシュ)
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_base_mesh = temp_base_mesh.evaluated_get(depsgraph)
base_mesh_data = evaluated_base_mesh.data
base_world_matrix = evaluated_base_mesh.matrix_world
# 元のベースメッシュの頂点グループインデックスを取得
base_vertex_group_idx = base_vertex_group.index
# 複製メッシュでも同じ頂点グループが存在することを確認
temp_base_vertex_group = None
for vg in temp_base_mesh.vertex_groups:
if vg.name == vertex_group_name:
temp_base_vertex_group = vg
break
if not temp_base_vertex_group:
print(f"エラー: 複製メッシュに頂点グループ '{vertex_group_name}' が見つかりません")
# 一時メッシュを削除
bpy.data.objects.remove(temp_base_mesh, do_unlink=True)
return
# ベースメッシュの頂点グループに属する頂点を取得(評価後のメッシュデータを使用)
base_vertices_in_group = set()
for vertex_idx, vertex in enumerate(base_mesh_data.vertices):
for group_elem in vertex.groups:
if group_elem.group == temp_base_vertex_group.index and group_elem.weight > 0.001:
base_vertices_in_group.add(vertex_idx)
break
print(f"頂点グループに属する頂点数: {len(base_vertices_in_group)}")
# 構成する頂点がすべて頂点グループに属する面を見つける(評価後のメッシュデータを使用)
target_face_indices = []
if use_all_faces:
target_face_indices = [face.index for face in base_mesh_data.polygons]
else:
for face in base_mesh_data.polygons:
if all(vertex_idx in base_vertices_in_group for vertex_idx in face.vertices):
target_face_indices.append(face.index)
print(f"条件を満たす面数: {len(target_face_indices)} (すべて三角形)")
if not target_face_indices:
print("警告: 条件を満たす面が見つかりません")
# 一時メッシュを削除
bpy.data.objects.remove(temp_base_mesh, do_unlink=True)
# 元の状態に復元
bpy.ops.object.select_all(action='DESELECT')
for obj in original_selected:
obj.select_set(True)
if original_active:
bpy.context.view_layer.objects.active = original_active
return
# ターゲットメッシュの頂点グループを作成または取得
target_vertex_group = None
if vertex_group_name in target_mesh.vertex_groups:
target_mesh.vertex_groups.remove(target_mesh.vertex_groups[vertex_group_name])
target_vertex_group = target_mesh.vertex_groups.new(name=vertex_group_name)
# ターゲットメッシュの各頂点について距離をチェック
found_vertices = []
# ターゲットメッシュの評価後データも取得
evaluated_target_mesh = target_mesh.evaluated_get(depsgraph)
target_mesh_data = evaluated_target_mesh.data
target_world_matrix = evaluated_target_mesh.matrix_world
target_normal_matrix = evaluated_target_mesh.matrix_world.inverted().transposed()
# BVHTreeを使った高速化
print("BVHTreeを使用して高速検索を実行中...")
import time
start_time = time.time()
# 三角面化されたベースメッシュからBVHTreeを構築
temp_bm = bmesh.new()
temp_bm.from_mesh(base_mesh_data)
temp_bm.faces.ensure_lookup_table()
temp_bm.verts.ensure_lookup_table()
# 対象面の頂点座標と面インデックスを準備
vertices = []
faces = []
# すべての頂点を追加(ワールド座標)
for vert in temp_bm.verts:
world_vert = base_world_matrix @ vert.co
vertices.append(world_vert)
# 対象面のみを追加(すべて三角形)
for face_idx in target_face_indices:
face = temp_bm.faces[face_idx]
face_indices = [v.index for v in face.verts]
faces.append(face_indices)
# BVHTreeを構築
if faces: # 面が存在する場合のみ
bvh = BVHTree.FromPolygons(vertices, faces)
# 各頂点の補間ウェイトを保存する辞書
vertex_interpolated_weights = {}
for vertex_idx, vertex in enumerate(target_mesh_data.vertices):
# 頂点のワールド座標(評価後のターゲットメッシュデータを使用)
world_vertex_pos = target_world_matrix @ vertex.co
nearest_point, normal, face_idx, distance = bvh.find_nearest(world_vertex_pos)
if max_angle_degrees is not None:
v = (world_vertex_pos - nearest_point).normalized()
angle = math.degrees(math.acos(min(1.0, max(-1.0, v.dot(normal)))))
if angle > max_angle_degrees:
vertex_interpolated_weights[vertex_idx] = 0.0
continue
# 最も近い面までの距離を取得
if nearest_point is not None and distance <= max_distance and face_idx is not None:
found_vertices.append(vertex_idx)
# 面を構成する頂点のインデックスを取得(すべて三角形)
face_vertex_indices = faces[face_idx]
# 面を構成する頂点のワールド座標を取得
face_vertices = [vertices[vi] for vi in face_vertex_indices]
# 三角形の重心座標を計算
bary_coords = barycentric_coords_from_point(nearest_point, face_vertices[0], face_vertices[1], face_vertices[2])
# 各頂点のベースメッシュ頂点グループでのウェイトを取得
weights = []
for vi in face_vertex_indices:
base_vert = base_mesh_data.vertices[vi]
vert_weight = 0.0
for group_elem in base_vert.groups:
if group_elem.group == temp_base_vertex_group.index:
vert_weight = group_elem.weight
break
weights.append(vert_weight)
# 重心座標で補間
interpolated_weight = (bary_coords[0] * weights[0] +
bary_coords[1] * weights[1] +
bary_coords[2] * weights[2])
vertex_interpolated_weights[vertex_idx] = max(0.0, min(1.0, interpolated_weight))
else:
vertex_interpolated_weights[vertex_idx] = 0.0
else:
print("警告: 対象となる面が見つかりません")
# 一時メッシュを削除
bpy.data.objects.remove(temp_base_mesh, do_unlink=True)
return
# 一時的なbmeshを解放
temp_bm.free()
end_time = time.time()
print(f"BVHTree検索完了: {end_time - start_time:.3f}秒")
# ターゲットメッシュの頂点グループにウェイトを設定
for vertex_idx in range(len(target_mesh_data.vertices)):
weight = vertex_interpolated_weights.get(vertex_idx, 0.0)
target_vertex_group.add([vertex_idx], weight, 'REPLACE')
bpy.ops.object.select_all(action='DESELECT')
target_mesh.select_set(True)
bpy.context.view_layer.objects.active = target_mesh
# Editモードに切り替えて全頂点を選択
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
# グループを選択
for i, group in enumerate(target_mesh.vertex_groups):
target_mesh.vertex_groups.active_index = i
if group.name == vertex_group_name:
break
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
# スムージングを適用
if smooth_repeat > 0:
bpy.ops.object.vertex_group_smooth(factor=0.5, repeat=smooth_repeat, expand=0.5)
bpy.ops.object.mode_set(mode='OBJECT')
# 一時的に作成した三角面化メッシュを削除
print(f"一時メッシュ '{temp_base_mesh.name}' を削除中...")
bpy.data.objects.remove(temp_base_mesh, do_unlink=True)
# 元の状態に復元
bpy.ops.object.select_all(action='DESELECT')
for obj in original_selected:
obj.select_set(True)
if original_active:
bpy.context.view_layer.objects.active = original_active
if original_mode.startswith('EDIT'):
bpy.ops.object.mode_set(mode='EDIT')
print(f"作成された頂点グループ: {vertex_group_name}")
print(f"条件を満たした頂点数: {len(found_vertices)}")
print(f"最大距離: {max_distance}")
def strip_numeric_suffix(bone_name: str) -> str:
"""
ボーン名から末尾の '.数字' パターンを削除
Parameters:
bone_name: ボーン名
Returns:
str: '.数字' が削除されたボーン名
"""
return re.sub(r'\.[\d]+$', '', bone_name)
def is_left_side_bone(bone_name: str, humanoid_name: str = None) -> bool:
"""
ボーンが左側かどうかを判定
Parameters:
bone_name: ボーン名
humanoid_name: Humanoidボーン名(オプション)
Returns:
bool: 左側のボーンの場合True
"""
# Humanoidボーン名での判定
if humanoid_name and any(k in humanoid_name for k in ["Left", "left"]):
return True
# 末尾の数字を削除
cleaned_name = strip_numeric_suffix(bone_name)
# ボーン名での判定
if any(k in cleaned_name for k in ["Left", "left"]):
return True
# 末尾での判定(スペースを含む場合も考慮)
suffixes = ["_L", ".L", " L"]
return any(cleaned_name.endswith(suffix) for suffix in suffixes)
def is_right_side_bone(bone_name: str, humanoid_name: str = None) -> bool:
"""
ボーンが右側かどうかを判定
Parameters:
bone_name: ボーン名
humanoid_name: Humanoidボーン名(オプション)
Returns:
bool: 右側のボーンの場合True
"""
# Humanoidボーン名での判定
if humanoid_name and any(k in humanoid_name for k in ["Right", "right"]):
return True
# 末尾の数字を削除
cleaned_name = strip_numeric_suffix(bone_name)
# ボーン名での判定
if any(k in cleaned_name for k in ["Right", "right"]):
return True
# 末尾での判定(スペースを含む場合も考慮)
suffixes = ["_R", ".R", " R"]
return any(cleaned_name.endswith(suffix) for suffix in suffixes)
def duplicate_mesh_with_partial_weights(base_mesh: bpy.types.Object, base_avatar_data: dict) -> tuple:
"""
素体メッシュを複製し左右の半身ウェイトを分離したものを作成
Returns: (右半身のみのメッシュ, 左半身のメッシュ)
"""
# 左右のボーンを分類
left_bones, right_bones = set(), set()
# 左右で別のグループにする脚・足・足指・胸のボーン
leg_foot_chest_bones = {
"LeftUpperLeg", "RightUpperLeg", "LeftLowerLeg", "RightLowerLeg",
"LeftFoot", "RightFoot", "LeftToes", "RightToes", "LeftBreast", "RightBreast",
"LeftFootThumbProximal", "LeftFootThumbIntermediate", "LeftFootThumbDistal",
"LeftFootIndexProximal", "LeftFootIndexIntermediate", "LeftFootIndexDistal",
"LeftFootMiddleProximal", "LeftFootMiddleIntermediate", "LeftFootMiddleDistal",
"LeftFootRingProximal", "LeftFootRingIntermediate", "LeftFootRingDistal",
"LeftFootLittleProximal", "LeftFootLittleIntermediate", "LeftFootLittleDistal",
"RightFootThumbProximal", "RightFootThumbIntermediate", "RightFootThumbDistal",
"RightFootIndexProximal", "RightFootIndexIntermediate", "RightFootIndexDistal",
"RightFootMiddleProximal", "RightFootMiddleIntermediate", "RightFootMiddleDistal",
"RightFootRingProximal", "RightFootRingIntermediate", "RightFootRingDistal",
"RightFootLittleProximal", "RightFootLittleIntermediate", "RightFootLittleDistal"
}
# 右側グループに入れる指ボーン
right_group_fingers = {
"LeftThumbProximal", "LeftThumbIntermediate", "LeftThumbDistal",
"LeftMiddleProximal", "LeftMiddleIntermediate", "LeftMiddleDistal",
"LeftLittleProximal", "LeftLittleIntermediate", "LeftLittleDistal",
"RightThumbProximal", "RightThumbIntermediate", "RightThumbDistal",
"RightMiddleProximal", "RightMiddleIntermediate", "RightMiddleDistal",
"RightLittleProximal", "RightLittleIntermediate", "RightLittleDistal"
}
# 左側グループに入れる指ボーン
left_group_fingers = {
"LeftIndexProximal", "LeftIndexIntermediate", "LeftIndexDistal",
"LeftRingProximal", "LeftRingIntermediate", "LeftRingDistal",
"RightIndexProximal", "RightIndexIntermediate", "RightIndexDistal",
"RightRingProximal", "RightRingIntermediate", "RightRingDistal"
}
# 分離しない肩・腕・手のボーン
excluded_bones = {
"LeftShoulder", "RightShoulder", "LeftUpperArm", "RightUpperArm",
"LeftLowerArm", "RightLowerArm", "LeftHand", "RightHand"
}
for bone_map in base_avatar_data.get("humanoidBones", []):
bone_name = bone_map["boneName"]
humanoid_name = bone_map["humanoidBoneName"]
if humanoid_name in excluded_bones:
# 分離しない
continue
elif humanoid_name in leg_foot_chest_bones:
# 脚・足・足指・胸は従来通り左右で分ける
if any(k in humanoid_name for k in ["Left", "left"]):
left_bones.add(bone_name)
elif any(k in humanoid_name for k in ["Right", "right"]):
right_bones.add(bone_name)
elif humanoid_name in right_group_fingers:
# 右側グループに入れる指ボーン
right_bones.add(bone_name)
elif humanoid_name in left_group_fingers:
# 左側グループに入れる指ボーン
left_bones.add(bone_name)
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_name = aux_set["humanoidBoneName"]
for aux_bone in aux_set["auxiliaryBones"]:
if humanoid_name in excluded_bones:
# 分離しない
continue
elif humanoid_name in leg_foot_chest_bones:
# 脚・足・足指・胸は従来通り左右で分ける
if is_left_side_bone(aux_bone, humanoid_name):
left_bones.add(aux_bone)
elif is_right_side_bone(aux_bone, humanoid_name):
right_bones.add(aux_bone)
elif humanoid_name in right_group_fingers:
# 右側グループに入れる指ボーン
right_bones.add(aux_bone)
elif humanoid_name in left_group_fingers:
# 左側グループに入れる指ボーン
left_bones.add(aux_bone)
# メッシュを複製 (通常版)
right_mesh = base_mesh.copy()
right_mesh.data = base_mesh.data.copy()
right_mesh.name = base_mesh.name + ".RightOnly"
bpy.context.scene.collection.objects.link(right_mesh)
left_mesh = base_mesh.copy()
left_mesh.data = base_mesh.data.copy()
left_mesh.name = base_mesh.name + ".LeftOnly"
bpy.context.scene.collection.objects.link(left_mesh)
left_base_mesh_armature_settings = store_armature_modifier_settings(left_mesh)
right_base_mesh_armature_settings = store_armature_modifier_settings(right_mesh)
apply_modifiers_keep_shapekeys_with_temp(left_mesh)
apply_modifiers_keep_shapekeys_with_temp(right_mesh)
restore_armature_modifier(left_mesh, left_base_mesh_armature_settings)
restore_armature_modifier(right_mesh, right_base_mesh_armature_settings)
set_armature_modifier_visibility(left_mesh, False, False)
set_armature_modifier_visibility(right_mesh, False, False)
print(f"left_bones: {left_bones}")
print(f"right_bones: {right_bones}")
# 通常版の処理
# 左右の頂点グループを削除
for bone_name in left_bones:
if bone_name in right_mesh.vertex_groups:
right_mesh.vertex_groups.remove(right_mesh.vertex_groups[bone_name])
for bone_name in right_bones:
if bone_name in left_mesh.vertex_groups:
left_mesh.vertex_groups.remove(left_mesh.vertex_groups[bone_name])
return right_mesh, left_mesh
def find_containing_objects(clothing_meshes, threshold=0.02):
"""
あるオブジェクトが他のオブジェクト全体を包含するペアを見つける
複数のオブジェクトに包含される場合は平均距離が最も小さいものにのみ包含される
Parameters:
clothing_meshes: チェック対象のメッシュオブジェクトのリスト
threshold: 距離の閾値
Returns:
dict: 包含するオブジェクトをキー、包含されるオブジェクトのリストを値とする辞書
"""
# 頂点間の平均距離を追跡する辞書
average_distances = {} # {(container, contained): average_distance}
# 各オブジェクトペアについてチェック
for i, obj1 in enumerate(clothing_meshes):
for j, obj2 in enumerate(clothing_meshes):
if i == j: # 同じオブジェクトはスキップ
continue
# 距離計算のため評価済みメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj1 = obj1.evaluated_get(depsgraph)
eval_mesh1 = eval_obj1.data
eval_obj2 = obj2.evaluated_get(depsgraph)
eval_mesh2 = eval_obj2.data
# BVHツリーを構築
bm1 = bmesh.new()
bm1.from_mesh(eval_mesh1)
bm1.transform(obj1.matrix_world)
bvh_tree1 = BVHTree.FromBMesh(bm1)
# すべての頂点が閾値内かどうかのフラグと距離の合計
all_within_threshold = True
total_distance = 0.0
vertex_count = 0
# 2つ目のオブジェクトの各頂点について最近接面までの距離を探索
for vert in eval_mesh2.vertices:
# 頂点のワールド座標を計算
vert_world = obj2.matrix_world @ vert.co
# 最近接点と距離を探索
nearest = bvh_tree1.find_nearest(vert_world)
if nearest is None:
all_within_threshold = False
break
# 距離は4番目の要素(インデックス3)
distance = nearest[3]
total_distance += distance
vertex_count += 1
if distance > threshold:
all_within_threshold = False
break
# すべての頂点が閾値内であれば、平均距離を記録
if all_within_threshold and vertex_count > 0:
average_distance = total_distance / vertex_count
average_distances[(obj1, obj2)] = average_distance
bm1.free()
# 最も平均距離が小さいコンテナを選択
best_containers = {} # {contained: (container, avg_distance)}
for (container, contained), avg_distance in average_distances.items():
if contained not in best_containers or avg_distance < best_containers[contained][1]:
best_containers[contained] = (container, avg_distance)
# 結果の辞書を構築
containing_objects = {}
for contained, (container, _) in best_containers.items():
if container not in containing_objects:
containing_objects[container] = []
containing_objects[container].append(contained)
if not containing_objects:
return {}
# 多重包有関係を統合し、各オブジェクトが一度だけ出現するようにする
parent_map = {}
for container, contained_list in containing_objects.items():
for child in contained_list:
parent_map[child] = container
def get_bounding_box_volume(obj):
try:
dims = getattr(obj, "dimensions", None)
if dims is None:
return 0.0
return float(dims[0]) * float(dims[1]) * float(dims[2])
except Exception:
return 0.0
def find_root(obj):
visited_list = []
visited_set = set()
current = obj
while current in parent_map and current not in visited_set:
visited_list.append(current)
visited_set.add(current)
current = parent_map[current]
if current in visited_set:
cycle_start = visited_list.index(current)
cycle_nodes = visited_list[cycle_start:]
root = max(
cycle_nodes,
key=lambda o: (
get_bounding_box_volume(o),
getattr(o, "name", str(id(o)))
)
)
else:
root = current
for node in visited_list:
parent_map[node] = root
return root
def collect_descendants(obj, visited):
result = []
for child in containing_objects.get(obj, []):
if child in visited:
continue
visited.add(child)
result.append(child)
result.extend(collect_descendants(child, visited))
return result
merged_containing_objects = {}
roots_in_order = []
for container in containing_objects.keys():
root = find_root(container)
if root not in merged_containing_objects:
merged_containing_objects[root] = []
roots_in_order.append(root)
assigned_objects = set()
for root in roots_in_order:
visited = {root}
descendants = collect_descendants(root, visited)
for child in descendants:
if child in assigned_objects:
continue
merged_containing_objects[root].append(child)
assigned_objects.add(child)
for contained, (container, _) in best_containers.items():
if contained in assigned_objects:
continue
root = find_root(container)
if root not in merged_containing_objects:
merged_containing_objects[root] = []
roots_in_order.append(root)
if contained == root:
continue
merged_containing_objects[root].append(contained)
assigned_objects.add(contained)
final_result = {root: merged_containing_objects[root] for root in roots_in_order if merged_containing_objects[root]}
if final_result:
seen_objects = set()
duplicate_objects = set()
for container, contained_list in final_result.items():
if container in seen_objects:
duplicate_objects.add(container)
else:
seen_objects.add(container)
for obj in contained_list:
if obj in seen_objects:
duplicate_objects.add(obj)
else:
seen_objects.add(obj)
if duplicate_objects:
duplicate_names = sorted(
{getattr(obj, "name", str(id(obj))) for obj in duplicate_objects}
)
print(
"find_containing_objects: 同じオブジェクトが複数回検出されました -> "
+ ", ".join(duplicate_names)
)
return final_result
def temporarily_merge_for_weight_transfer(container_obj, contained_objs, base_armature, base_avatar_data, clothing_avatar_data, field_path, clothing_armature, blend_shape_settings, cloth_metadata):
"""
オブジェクトを一時的に結合し、weight transferのみを適用した後、結果を元のオブジェクトに復元する
Parameters:
container_obj: 包含するオブジェクト
contained_objs: 包含されるオブジェクトのリスト
base_armature: ベースのアーマチュア
base_avatar_data: ベースアバターデータ
clothing_avatar_data: 衣装アバターデータ
field_path: フィールドパス
clothing_armature: 衣装のアーマチュア
cloth_metadata: クロスメタデータ
"""
# 元のデータを保存
original_active = bpy.context.active_object
original_mode = bpy.context.mode
# すべてのオブジェクトの選択状態を保存
original_selection = {obj: obj.select_get() for obj in bpy.data.objects}
# 一時的なリストにすべてのオブジェクトを追加
to_merge = [container_obj] + contained_objs
# 頂点グループの情報を保存
vertex_groups_data = {}
for obj in to_merge:
vertex_groups_data[obj.name] = {}
for vg in obj.vertex_groups:
vg_data = []
for v in obj.data.vertices:
weight = 0.0
for g in v.groups:
if g.group == vg.index:
weight = g.weight
break
if weight > 0:
vg_data.append((v.index, weight))
vertex_groups_data[obj.name][vg.name] = vg_data
# すべてのオブジェクトの複製を作成
duplicated_objs = []
bpy.ops.object.select_all(action='DESELECT')
for obj in to_merge:
obj.select_set(True)
bpy.context.view_layer.objects.active = obj
bpy.ops.object.duplicate()
dup_obj = bpy.context.active_object
duplicated_objs.append(dup_obj)
bpy.ops.object.select_all(action='DESELECT')
# 複製したオブジェクトを結合
bpy.ops.object.select_all(action='DESELECT')
for obj in duplicated_objs:
obj.select_set(True)
bpy.context.view_layer.objects.active = duplicated_objs[0]
bpy.ops.object.join()
# 結合したオブジェクト
merged_obj = bpy.context.active_object
merged_obj.name = f"TempMerged_{container_obj.name}"
# 結合したオブジェクトに対してweight transferのみを適用
# process_weight_transfer(merged_obj, base_armature, base_avatar_data, field_path, clothing_armature, cloth_metadata)
process_weight_transfer_with_component_normalization(merged_obj, base_armature, base_avatar_data, clothing_avatar_data, field_path, clothing_armature, blend_shape_settings, cloth_metadata)
depsgraph = bpy.context.evaluated_depsgraph_get()
# モディファイア適用後のソースメッシュを取得
eval_merged_obj = merged_obj.evaluated_get(depsgraph)
eval_merged_mesh = eval_merged_obj.data
merged_world_coords = [merged_obj.matrix_world @ v.co for v in eval_merged_mesh.vertices]
# KDTreeを使用して最も近い頂点を高速に検索
kdtree = KDTree(len(merged_world_coords))
for i, v_co in enumerate(merged_world_coords):
kdtree.insert(v_co, i)
kdtree.balance()
# 頂点グループの情報を元のオブジェクトに復元
for obj in to_merge:
# 既存の頂点グループをクリア
for vg in obj.vertex_groups[:]:
obj.vertex_groups.remove(vg)
# 結合オブジェクトから新しい頂点グループを作成
for vg in merged_obj.vertex_groups:
obj.vertex_groups.new(name=vg.name)
# 評価済みの頂点座標を取得(現在の状態)
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
obj_world_coords = [obj.matrix_world @ v.co for v in eval_mesh.vertices]
# 元のオブジェクトの各頂点に対して最も近い頂点を探し、ウェイトをコピー
for i, vert_co in enumerate(obj_world_coords):
co, merged_vert_idx, dist = kdtree.find(vert_co)
# マージされたオブジェクト内の対応する頂点からウェイト情報をコピー
if merged_vert_idx >= 0:
for g in merged_obj.data.vertices[merged_vert_idx].groups:
vg_name = merged_obj.vertex_groups[g.group].name
if vg_name in obj.vertex_groups:
obj.vertex_groups[vg_name].add([i], g.weight, 'REPLACE')
# 一時的なオブジェクトを削除
bpy.ops.object.select_all(action='DESELECT')
merged_obj.select_set(True)
bpy.ops.object.delete()
# 元の選択状態を復元
for obj, was_selected in original_selection.items():
if obj.name in bpy.data.objects: # オブジェクトが存在することを確認
obj.select_set(was_selected)
# 元のアクティブオブジェクトと元のモードを復元
if original_active and original_active.name in bpy.data.objects:
bpy.context.view_layer.objects.active = original_active
if original_mode != 'OBJECT':
bpy.ops.object.mode_set(mode=original_mode)
def group_components_by_weight_pattern(obj, base_avatar_data, clothing_armature):
"""
同じウェイトパターンを持つ連結成分をグループ化する
Parameters:
obj: 処理対象のメッシュオブジェクト
base_avatar_data: ベースアバターデータ
Returns:
dict: ウェイトパターンをキー、連結成分のリストを値とする辞書
"""
# BMeshを作成
bm = bmesh.new()
bm.from_mesh(obj.data)
bm.verts.ensure_lookup_table()
bm.edges.ensure_lookup_table()
bm.faces.ensure_lookup_table()
base_obj = bpy.data.objects.get("Body.BaseAvatar")
if not base_obj:
raise Exception("Base avatar mesh (Body.BaseAvatar) not found")
# すべての連結成分を取得
components = find_connected_components(obj)
# 各コンポーネントの頂点数を表示
# for j, comp in enumerate(components):
# print(f"Component {j}: {len(comp)} vertices")
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
if clothing_armature:
target_groups.update(bone.name for bone in clothing_armature.data.bones)
# メッシュ内に存在する対象グループのみを抽出
existing_target_groups = {vg.name for vg in obj.vertex_groups if vg.name in target_groups}
# 各連結成分のウェイトパターンを計算
component_patterns = {}
uniform_components = []
if "Rigid2" not in obj.vertex_groups:
obj.vertex_groups.new(name="Rigid2")
rigid_group = obj.vertex_groups["Rigid2"]
for component in components:
# コンポーネント内の各頂点のウェイトパターンを収集
vertex_weights = []
for vert_idx in component:
vert = obj.data.vertices[vert_idx]
weights = {group: 0.0 for group in existing_target_groups}
for g in vert.groups:
group_name = obj.vertex_groups[g.group].name
if group_name in existing_target_groups:
weights[group_name] = g.weight
vertex_weights.append(weights)
# 頂点ウェイトが空の場合は次のコンポーネントへスキップ
if not vertex_weights:
continue
# チェック対象のすべてのグループで同じウェイトパターンかチェック
is_uniform = True
first_weights = vertex_weights[0]
for weights in vertex_weights[1:]:
for group_name in existing_target_groups:
if abs(weights[group_name] - first_weights[group_name]) >= 0.0001:
is_uniform = False
break
if not is_uniform:
break
# 一様なウェイトパターンを持つ連結成分のみを記録
if is_uniform:
# 評価済みメッシュから頂点座標を取得
component_points = []
for idx in component:
if idx < len(bm.verts):
component_points.append(obj.matrix_world @ bm.verts[idx].co)
# 素体メッシュとの交差をチェック
if len(component_points) >= 3:
# OBBを計算
obb = calculate_obb_from_points(component_points)
# OBBが計算できない場合はスキップ
if obb is not None:
# 素体メッシュとの交差をチェック
if check_mesh_obb_intersection(base_obj, obb):
print(f"Component with {len(component)} vertices intersects with base mesh, excluding from rigid transfer")
continue
uniform_components.append(component)
# ウェイトパターンをハッシュ可能な形式に変換
pattern_tuple = tuple(sorted((k, round(v, 4)) for k, v in first_weights.items() if v > 0))
# pattern_tupleが空でない場合のみ処理を実行
if pattern_tuple:
# 一様なウェイトパターンを持つ連結成分の頂点すべてにRigid頂点グループのウェイトを1に設定
for vert_idx in component:
rigid_group.add([vert_idx], 1.0, 'REPLACE')
if pattern_tuple not in component_patterns:
component_patterns[pattern_tuple] = []
component_patterns[pattern_tuple].append(component)
# BMeshを解放
bm.free()
print(f"Found {len(components)} connected components in {obj.name}")
print(f"Found {len(component_patterns)} uniform weight patterns in {obj.name}")
# デバッグ用に各パターンの詳細を表示
for i, (pattern, components_list) in enumerate(component_patterns.items()):
total_vertices = sum(len(comp) for comp in components_list)
print(f"Pattern {i}: {pattern}")
print(f" Components: {len(components_list)}, Total vertices: {total_vertices}")
# 各コンポーネントの頂点数を表示
for j, comp in enumerate(components_list):
print(f" Component {j}: {len(comp)} vertices")
return component_patterns
def calculate_obb_from_points(points):
"""
点群からOriented Bounding Box (OBB)を計算する
Parameters:
points: 点群のリスト(Vector型またはタプル)
Returns:
dict: OBBの情報を含む辞書
'center': 中心座標
'axes': 主軸(3x3の行列、各列が軸)
'radii': 各軸方向の半径
または None: 計算不能な場合
"""
# 点群が少なすぎる場合はNoneを返す
if len(points) < 3:
print(f"警告: 点群が少なすぎます({len(points)}点)。OBB計算をスキップします。")
return None
try:
# 点群をnumpy配列に変換
points_np = np.array([[p.x, p.y, p.z] for p in points])
# 点群の中心を計算
center = np.mean(points_np, axis=0)
# 中心を原点に移動
centered_points = points_np - center
# 共分散行列を計算
cov_matrix = np.cov(centered_points, rowvar=False)
# 行列のランクをチェック
if np.linalg.matrix_rank(cov_matrix) < 3:
print("警告: 共分散行列のランクが不足しています。OBB計算をスキップします。")
return None
# 固有値と固有ベクトルを計算
eigenvalues, eigenvectors = np.linalg.eigh(cov_matrix)
# 固有値が非常に小さい場合はスキップ
if np.any(np.abs(eigenvalues) < 1e-10):
print("警告: 固有値が非常に小さいです。OBB計算をスキップします。")
return None
# 固有値の大きさでソート(降順)
idx = eigenvalues.argsort()[::-1]
eigenvalues = eigenvalues[idx]
eigenvectors = eigenvectors[:, idx]
# 主軸を取得(列ベクトルとして)
axes = eigenvectors
# 各軸方向の点の投影を計算
projections = np.abs(np.dot(centered_points, axes))
# 各軸方向の最大値を半径として使用
radii = np.max(projections, axis=0)
# 結果を辞書として返す
return {
'center': center,
'axes': axes,
'radii': radii
}
except Exception as e:
print(f"OBB計算中にエラーが発生しました: {e}")
return None
def reset_bone_weights(target_obj, bone_groups):
"""指定された頂点グループのウェイトを0に設定"""
for vert in target_obj.data.vertices:
for group in target_obj.vertex_groups:
if group.name in bone_groups:
try:
group.add([vert.index], 0, 'REPLACE')
except RuntimeError:
continue
def store_weights(target_obj, bone_groups_to_store):
"""頂点グループのウェイトを保存"""
weights = {}
for vert in target_obj.data.vertices:
weights[vert.index] = {}
for group in target_obj.vertex_groups:
if group.name in bone_groups_to_store:
try:
for g in vert.groups:
if g.group == group.index:
weights[vert.index][group.name] = g.weight
break
except RuntimeError:
continue
return weights
def restore_weights(target_obj, stored_weights):
"""保存したウェイトを復元"""
for vert_idx, groups in stored_weights.items():
for group_name, weight in groups.items():
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups[group_name].add([vert_idx], weight, 'REPLACE')
def pre_process_a_pose_setup(target_obj, armature, base_avatar_data, clothing_avatar_data, clothing_armature, humanoid_to_bone):
"""
Aポーズ処理の前処理を行う
Parameters:
target_obj: 処理対象のメッシュオブジェクト
armature: アーマチュアオブジェクト
base_avatar_data: ベースアバターデータ
clothing_avatar_data: 衣装アバターデータ
clothing_armature: 衣装のアーマチュア
humanoid_to_bone: Humanoidボーンマッピング
Returns:
tuple: (non_humanoid_weights, temp_A_pose_shape_key_name)
"""
global _is_A_pose
non_humanoid_weights = {}
temp_A_pose_shape_key_name = "temp_A_pose_shape_key"
if _is_A_pose and armature and armature.type == 'ARMATURE':
print(" Aポーズのため処理を実行")
all_bone_groups = set()
for vertex_group in target_obj.vertex_groups:
all_bone_groups.add(vertex_group.name)
all_original_weights = store_weights(target_obj, all_bone_groups)
# Humanoidボーンおよびauxiliaryボーンに含まれないボーンの頂点グループウェイトを処理
print(" 非Humanoid/auxiliaryボーンのウェイト処理を開始")
# Humanoidボーンとauxiliaryボーンのセットを作成
humanoid_and_aux_bones = set()
# Humanoidボーンを追加
for bone_map in base_avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
humanoid_and_aux_bones.add(bone_map["boneName"])
# auxiliaryボーンを追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
for aux_bone in aux_set.get("auxiliaryBones", []):
humanoid_and_aux_bones.add(aux_bone)
# clothing_armatureのHumanoidボーンマッピングを作成
clothing_bones_to_humanoid = {}
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if "boneName" in bone_map and "humanoidBoneName" in bone_map:
clothing_bones_to_humanoid[bone_map["boneName"]] = bone_map["humanoidBoneName"]
# 追加した頂点グループを記録するセット
added_vertex_groups = set()
# target_objの全頂点グループをチェック
for vertex_group in target_obj.vertex_groups:
group_name = vertex_group.name
# Humanoidボーンまたはauxiliaryボーンに含まれる場合はスキップ
if group_name in humanoid_and_aux_bones:
continue
# clothing_armatureで親を辿ってHumanoidボーンを見つける
if clothing_armature and clothing_armature.type == 'ARMATURE':
current_bone = clothing_armature.data.bones.get(group_name)
target_humanoid_bone_name = None
while current_bone and current_bone.parent:
parent_bone = current_bone.parent
if parent_bone.name in clothing_bones_to_humanoid and clothing_bones_to_humanoid[parent_bone.name] in humanoid_to_bone:
target_humanoid_bone_name = humanoid_to_bone[clothing_bones_to_humanoid[parent_bone.name]]
break
current_bone = parent_bone
if target_humanoid_bone_name:
# target_objにHumanoidボーンの頂点グループが存在するかチェック
target_group = target_obj.vertex_groups.get(target_humanoid_bone_name)
if not target_group:
# 頂点グループを作成
target_group = target_obj.vertex_groups.new(name=target_humanoid_bone_name)
added_vertex_groups.add(target_humanoid_bone_name)
print(f" 頂点グループ '{target_humanoid_bone_name}' を追加")
# ウェイトを転送
source_group = vertex_group
if source_group:
# 各頂点のウェイトを転送
for vertex in target_obj.data.vertices:
try:
source_weight = 0.0
for g in target_obj.data.vertices[vertex.index].groups:
if g.group == source_group.index:
source_weight = g.weight
break
if source_weight > 0:
# 既存のウェイトがある場合は加算
try:
existing_weight = 0.0
for g in target_obj.data.vertices[vertex.index].groups:
if g.group == target_group.index:
existing_weight = g.weight
break
combined_weight = min(1.0, existing_weight + source_weight)
target_group.add([vertex.index], combined_weight, 'REPLACE')
if (target_humanoid_bone_name, vertex.index) not in non_humanoid_weights:
non_humanoid_weights[(target_humanoid_bone_name, vertex.index)] = 0.0
non_humanoid_weights[(target_humanoid_bone_name, vertex.index)] = non_humanoid_weights[(target_humanoid_bone_name, vertex.index)] + source_weight
except RuntimeError:
# 既存のウェイトがない場合は新規追加
target_group.add([vertex.index], source_weight, 'ADD')
except RuntimeError:
# 頂点がソースグループに属していない場合
pass
print(f" ウェイト転送: '{group_name}' -> '{target_humanoid_bone_name}'")
set_armature_modifier_visibility(target_obj, True, True)
# LeftUpperArmとRightUpperArmボーンにY軸回転を適用
print(" LeftUpperArmとRightUpperArmボーンにY軸回転を適用")
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperArmとRightUpperArmのboneNameを取得
left_upper_arm_bone = None
right_upper_arm_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperArm":
left_upper_arm_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperArm":
right_upper_arm_bone = bone_map.get("boneName")
# LeftUpperArmボーンに45度のY軸回転を適用
if left_upper_arm_bone and left_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperArmボーンに-45度のY軸回転を適用
if right_upper_arm_bone and right_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = target_obj
bpy.context.view_layer.update()
shape_key_state = save_shape_key_state(target_obj)
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 一時シェイプキーを作成
if target_obj.data.shape_keys is None:
target_obj.shape_key_add(name='Basis')
if target_obj.data.shape_keys and temp_A_pose_shape_key_name in target_obj.data.shape_keys.key_blocks:
temp_A_pose_shape_key = target_obj.data.shape_keys.key_blocks[temp_A_pose_shape_key_name]
else:
temp_A_pose_shape_key = target_obj.shape_key_add(name=temp_A_pose_shape_key_name)
#現在の評価済みメッシュを取得、アーマチュア変形後の状態を保存
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
for i, vert in enumerate(eval_mesh.vertices):
temp_A_pose_shape_key.data[i].co = vert.co.copy()
set_armature_modifier_visibility(target_obj, False, False)
restore_shape_key_state(target_obj, shape_key_state)
temp_A_pose_shape_key.value = 1.0
# 追加した頂点グループを削除
if added_vertex_groups:
print(" 追加した頂点グループを削除中...")
for group_name in added_vertex_groups:
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups.remove(target_obj.vertex_groups[group_name])
print(f" 頂点グループ '{group_name}' を削除")
reset_bone_weights(target_obj, all_bone_groups)
restore_weights(target_obj, all_original_weights)
return non_humanoid_weights, temp_A_pose_shape_key_name
def post_process_a_pose_cleanup(target_obj, armature, base_avatar_data, non_humanoid_weights, temp_A_pose_shape_key_name):
"""
Aポーズ処理の後処理を行う
Parameters:
target_obj: 処理対象のメッシュオブジェクト
armature: アーマチュアオブジェクト
base_avatar_data: ベースアバターデータ
non_humanoid_weights: 非Humanoidウェイト辞書
temp_A_pose_shape_key_name: 一時シェイプキー名
"""
global _is_A_pose
if _is_A_pose and armature and armature.type == 'ARMATURE':
print(" Aポーズのため処理を実行")
set_armature_modifier_visibility(target_obj, True, True)
all_bone_groups = set()
for vertex_group in target_obj.vertex_groups:
all_bone_groups.add(vertex_group.name)
all_original_weights = store_weights(target_obj, all_bone_groups)
for (bone_name, vertex_index), weight in non_humanoid_weights.items():
if bone_name not in target_obj.vertex_groups:
target_obj.vertex_groups.new(name=bone_name)
group_index = target_obj.vertex_groups[bone_name].index
source_weight = 0.0
for g in target_obj.data.vertices[vertex_index].groups:
if g.group == group_index:
source_weight = g.weight
break
combined_weight = min(1.0, source_weight + weight)
target_obj.vertex_groups[bone_name].add([vertex_index], combined_weight, 'REPLACE')
# LeftUpperArmとRightUpperArmボーンにY軸逆回転を適用
print(" LeftUpperArmとRightUpperArmボーンにY軸逆回転を適用")
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperArmとRightUpperArmのboneNameを取得
left_upper_arm_bone = None
right_upper_arm_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperArm":
left_upper_arm_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperArm":
right_upper_arm_bone = bone_map.get("boneName")
# LeftUpperArmボーンに-90度のY軸回転を適用
if left_upper_arm_bone and left_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-90度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-90), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperArmボーンに90度のY軸回転を適用
if right_upper_arm_bone and right_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での90度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(90), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
shape_key_state = save_shape_key_state(target_obj)
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
if temp_A_pose_shape_key_name in target_obj.data.shape_keys.key_blocks:
temp_A_pose_shape_key = target_obj.data.shape_keys.key_blocks[temp_A_pose_shape_key_name]
temp_A_pose_shape_key.value = 1.0
#現在の評価済みメッシュを取得、アーマチュア変形後の状態を保存
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# Basisのシェイプキーを取得
basis_shape_key = target_obj.data.shape_keys.key_blocks["Basis"]
for i, vert in enumerate(eval_mesh.vertices):
basis_shape_key.data[i].co = vert.co.copy()
# LeftUpperArmボーンに45度のY軸回転を適用
if left_upper_arm_bone and left_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperArmボーンに-45度のY軸回転を適用
if right_upper_arm_bone and right_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
restore_shape_key_state(target_obj, shape_key_state)
# 一時シェイプキーを削除
if temp_A_pose_shape_key_name in target_obj.data.shape_keys.key_blocks:
temp_A_pose_shape_key = target_obj.data.shape_keys.key_blocks[temp_A_pose_shape_key_name]
target_obj.shape_key_remove(temp_A_pose_shape_key)
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = target_obj
bpy.context.view_layer.update()
set_armature_modifier_visibility(target_obj, False, False)
reset_bone_weights(target_obj, all_bone_groups)
restore_weights(target_obj, all_original_weights)
def process_weight_transfer_with_component_normalization(target_obj, armature, base_avatar_data, clothing_avatar_data, field_path, clothing_armature, blend_shape_settings, cloth_metadata=None):
"""
ウェイト転送処理を行い、連結成分ごとにウェイトを正規化する
Parameters:
target_obj: 処理対象のメッシュオブジェクト
armature: アーマチュアオブジェクト
base_avatar_data: ベースアバターデータ
clothing_avatar_data: 衣装アバターデータ
field_path: フィールドパス
clothing_armature: 衣装のアーマチュア
cloth_metadata: クロスメタデータ
"""
import time
start_total = time.time()
print(f"process_weight_transfer_with_component_normalization 処理開始: {target_obj.name}")
# humanoid_to_boneマッピングを作成
humanoid_to_bone = {}
for bone_map in base_avatar_data.get("humanoidBones", []):
if "boneName" in bone_map and "humanoidBoneName" in bone_map:
humanoid_to_bone[bone_map["humanoidBoneName"]] = bone_map["boneName"]
# 素体メッシュを取得
start_time = time.time()
base_obj = bpy.data.objects.get("Body.BaseAvatar")
if not base_obj:
raise Exception("Base avatar mesh (Body.BaseAvatar) not found")
left_base_obj = bpy.data.objects["Body.BaseAvatar.LeftOnly"]
right_base_obj = bpy.data.objects["Body.BaseAvatar.RightOnly"]
print(f"Set blend_shape_settings: {blend_shape_settings}")
if base_obj.data.shape_keys:
for blend_shape_setting in blend_shape_settings:
if blend_shape_setting['name'] in base_obj.data.shape_keys.key_blocks:
base_obj.data.shape_keys.key_blocks[blend_shape_setting['name']].value = blend_shape_setting['value']
left_base_obj.data.shape_keys.key_blocks[blend_shape_setting['name']].value = blend_shape_setting['value']
right_base_obj.data.shape_keys.key_blocks[blend_shape_setting['name']].value = blend_shape_setting['value']
print(f"Set {blend_shape_setting['name']} to {blend_shape_setting['value']}")
# 評価済みのメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_target_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_target_obj.data
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# メッシュ内に存在する対象グループのみを抽出
existing_target_groups = {vg.name for vg in target_obj.vertex_groups if vg.name in target_groups}
print(f"準備時間: {time.time() - start_time:.2f}秒")
# 処理前に同じウェイトパターンを持つ連結成分をグループ化
start_time = time.time()
component_patterns = group_components_by_weight_pattern(target_obj, base_avatar_data, clothing_armature)
print(f"コンポーネントパターン抽出時間: {time.time() - start_time:.2f}秒")
# 処理前の各頂点のウェイトパターンを保存
start_time = time.time()
original_vertex_weights = {}
for vert_idx, vert in enumerate(target_obj.data.vertices):
weights = {}
for group_name in existing_target_groups:
weight = 0.0
for g in vert.groups:
if target_obj.vertex_groups[g.group].name == group_name:
weight = g.weight
break
if weight > 0.0001:
weights[group_name] = weight
original_vertex_weights[vert_idx] = weights
print(f"元のウェイト保存時間: {time.time() - start_time:.2f}秒")
# 通常のウェイト転送処理を実行
start_time = time.time()
process_weight_transfer(target_obj, armature, base_avatar_data, clothing_avatar_data, field_path, clothing_armature, cloth_metadata)
print(f"通常ウェイト転送処理時間: {time.time() - start_time:.2f}秒")
start_time = time.time()
new_component_patterns = {}
# 各パターンのグループに対して処理
for pattern, components in component_patterns.items():
# patternにexisting_target_groupsに含まれないグループしかない場合
if not any(group in existing_target_groups for group in pattern):
all_deform_groups = set(existing_target_groups)
if clothing_armature:
all_deform_groups.update(bone.name for bone in clothing_armature.data.bones)
# NonHumanoidDifferenceグループのウェイトが存在するかチェックしつつ、そのウェイトが最大となる頂点を取得
non_humanoid_difference_group = target_obj.vertex_groups.get("NonHumanoidDifference")
is_non_humanoid_difference_group = False
max_weight = 0.0
if non_humanoid_difference_group:
for component in components:
for vert_idx in component:
vert = target_obj.data.vertices[vert_idx]
for g in vert.groups:
if g.group == non_humanoid_difference_group.index and g.weight > 0.0001:
is_non_humanoid_difference_group = True
if g.weight > max_weight:
max_weight = g.weight
# NonHumanoidDifferenceグループのウェイトが存在する場合、そのウェイトが最大となる頂点のウェイトパターンの平均ウェイトを他のすべての頂点に適用
if is_non_humanoid_difference_group:
max_avg_pattern = {}
count = 0
for component in components:
for vert_idx in component:
vert = target_obj.data.vertices[vert_idx]
for g in vert.groups:
if g.group == non_humanoid_difference_group.index and g.weight == max_weight:
for g2 in vert.groups:
if target_obj.vertex_groups[g2.group].name in all_deform_groups:
if g2.group not in max_avg_pattern:
max_avg_pattern[g2.group] = g2.weight
else:
max_avg_pattern[g2.group] += g2.weight
count += 1
break
if count > 0:
for group_name, weight in max_avg_pattern.items():
max_avg_pattern[group_name] = weight / count
for component in components:
for vert_idx in component:
vert = target_obj.data.vertices[vert_idx]
for g in vert.groups:
if g.group not in max_avg_pattern and target_obj.vertex_groups[g.group].name in all_deform_groups:
g.weight = 0.0
for max_group_id, max_weight in max_avg_pattern.items():
group = target_obj.vertex_groups[max_group_id]
group.add([vert_idx], max_weight, 'REPLACE')
continue
# patternからexisting_target_groupsに含まれるグループのみを抽出
original_pattern_dict = {}
for group_name, weight in pattern:
original_pattern_dict[group_name] = weight
original_pattern = tuple(sorted((k, v) for k, v in original_pattern_dict.items() if k in existing_target_groups))
# 各グループ内のすべての頂点のウェイトを収集
all_weights = {group: [] for group in existing_target_groups}
all_vertices = set()
for component in components:
for vert_idx in component:
all_vertices.add(vert_idx)
vert = target_obj.data.vertices[vert_idx]
for group_name in existing_target_groups:
weight = 0.0
for g in vert.groups:
if target_obj.vertex_groups[g.group].name == group_name:
weight = g.weight
break
all_weights[group_name].append(weight)
# 各グループの平均ウェイトを計算
avg_weights = {}
for group_name, weights in all_weights.items():
if weights:
avg_weights[group_name] = sum(weights) / len(weights)
else:
avg_weights[group_name] = 0.0
# すべての頂点に平均ウェイトを適用
for vert_idx in all_vertices:
for group_name, avg_weight in avg_weights.items():
group = target_obj.vertex_groups[group_name]
if avg_weight > 0.0001:
group.add([vert_idx], avg_weight, 'REPLACE')
else:
group.add([vert_idx], 0.0, 'REPLACE')
# component_patternsのpatternを更新
new_pattern = tuple(sorted((k, round(v, 4)) for k, v in avg_weights.items() if v > 0.0001))
new_component_patterns[(new_pattern, original_pattern)] = components
component_patterns = new_component_patterns
print(f"コンポーネントパターン正規化時間: {time.time() - start_time:.2f}秒")
# コンポーネントパターンに含まれる頂点のOBBを計算し、周辺の頂点に影響を与える処理
if component_patterns:
# OBBデータ収集
start_time = time.time()
# オブジェクトモードで評価済みのメッシュを取得
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
target_obj.select_set(True)
bpy.context.view_layer.objects.active = target_obj
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = target_obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# 安全チェック:評価済みメッシュが空でないことを確認
if len(eval_mesh.vertices) == 0:
print(f"警告: {target_obj.name} の評価済みメッシュに頂点がありません。OBB計算をスキップします。")
return
# EDITモードに入る前に必要なデータを収集
obb_data = []
all_rigid_component_vertices = set()
for (new_pattern, original_pattern), components in component_patterns.items():
# コンポーネント内のすべての頂点を収集
for component in components:
all_rigid_component_vertices.update(component)
component_count = 0
# 各パターンのコンポーネントに対して処理
for (new_pattern, original_pattern), components in component_patterns.items():
# 新しいパターンのウェイト情報を辞書に変換
pattern_weights = {}
for group_name, weight in new_pattern:
pattern_weights[group_name] = weight
# オリジナルパターンのウェイト情報を辞書に変換
original_pattern_weights = {}
for group_name, weight in original_pattern:
original_pattern_weights[group_name] = weight
# 同じパターンを持つすべてのコンポーネントの頂点を収集
all_component_vertices = set()
for component in components:
all_component_vertices.update(component)
# 各コンポーネントの頂点座標とサイズ情報を取得
component_coords = {}
component_sizes = {}
for component_idx, component in enumerate(components):
coords = []
for vert_idx in component:
if vert_idx < len(eval_mesh.vertices):
coords.append(eval_obj.matrix_world @ eval_mesh.vertices[vert_idx].co)
if coords:
component_coords[component_idx] = coords
# コンポーネントのサイズを計算(最大距離またはバウンディングボックスのサイズ)
size = calculate_component_size(coords)
component_sizes[component_idx] = size
# 空のコンポーネントをスキップ
if not component_coords:
continue
# コンポーネント間の距離に基づいてクラスタリング
# サイズに基づいて適応的に閾値を決定
clusters = cluster_components_by_adaptive_distance(component_coords, component_sizes)
# 各クラスターに対してOBBを計算
for cluster_idx, cluster in enumerate(clusters):
# クラスター内のすべての頂点座標を収集
cluster_vertices = set()
cluster_coords = []
for comp_idx in cluster:
for vert_idx in components[comp_idx]:
cluster_vertices.add(vert_idx)
if vert_idx < len(eval_mesh.vertices):
cluster_coords.append(eval_obj.matrix_world @ eval_mesh.vertices[vert_idx].co)
# 頂点が少なすぎる場合はスキップ
if len(cluster_coords) < 3:
print(f"警告: パターン {pattern} のクラスター {cluster_idx} の有効な頂点が少なすぎます({len(cluster_coords)}点)。スキップします。")
continue
# OBBを計算
obb = calculate_obb_from_points(cluster_coords)
# OBB計算が失敗した場合はスキップ
if obb is None:
print(f"警告: パターン {pattern} のクラスター {cluster_idx} のOBB計算に失敗しました。スキップします。")
continue
# OBBを20%膨張
obb['radii'] = [radius * 1.3 for radius in obb['radii']]
# 頂点選択用のデータを保存
vertices_in_obb = []
for vert_idx, vert in enumerate(target_obj.data.vertices):
if vert_idx not in all_rigid_component_vertices and vert_idx < len(eval_mesh.vertices):
try:
# 評価済みの頂点のワールド座標
vert_world = eval_obj.matrix_world @ eval_mesh.vertices[vert_idx].co
# OBBの中心からの相対位置
relative_pos = vert_world - Vector(obb['center'])
# OBBの各軸に沿った投影
projections = [abs(relative_pos.dot(Vector(obb['axes'][:, i]))) for i in range(3)]
# すべての軸で投影が半径以内ならOBB内
if all(proj <= radius for proj, radius in zip(projections, obb['radii'])):
vertices_in_obb.append(vert_idx)
except Exception as e:
print(f"警告: 頂点 {vert_idx} のOBBチェック中にエラーが発生しました: {e}")
continue
if not vertices_in_obb:
print(f"警告: パターン {pattern} のクラスター {cluster_idx} のOBB内に頂点が見つかりませんでした。スキップします。")
continue
obb_data.append({
'component_vertices': cluster_vertices,
'vertices_in_obb': vertices_in_obb,
'component_id': component_count,
'pattern_weights': pattern_weights,
'original_pattern_weights': original_pattern_weights
})
component_count += 1
print(f"OBBデータ収集時間: {time.time() - start_time:.2f}秒")
# OBBデータがない場合は処理をスキップ
if not obb_data:
print("警告: 有効なOBBデータがありません。処理をスキップします。")
return
start_time = time.time()
#vert_neighbors = create_vertex_neighbors_list(target_obj, expand_distance=0.04, sigma=0.02)
neighbors_info, offsets, num_verts = create_vertex_neighbors_array(target_obj, expand_distance=0.02, sigma=0.00659)
print(f"頂点近傍リスト作成時間: {time.time() - start_time:.2f}秒")
# OBB処理開始
start_time = time.time()
# 編集モードに入る
bpy.ops.object.mode_set(mode='EDIT')
# 各OBBデータに対して処理
for obb_idx, data in enumerate(obb_data):
obb_start = time.time()
# "Connected"頂点グループを作成または取得
connected_group = target_obj.vertex_groups.new(name=f"Connected_{data['component_id']}")
print(f" Connected頂点グループ作成: {connected_group.name}")
# すべての選択を解除
bpy.ops.mesh.select_all(action='DESELECT')
# BMeshを使用して頂点を選択
bm = bmesh.from_edit_mesh(target_obj.data)
bm.verts.ensure_lookup_table()
# OBB内の頂点を選択
obb_vertex_select_start = time.time()
for vert_idx in data['vertices_in_obb']:
if vert_idx < len(bm.verts):
bm.verts[vert_idx].select = True
# BMeshの変更をメッシュに反映
bmesh.update_edit_mesh(target_obj.data)
print(f" OBB内頂点選択時間: {time.time() - obb_vertex_select_start:.2f}秒")
# 選択された頂点に含まれるエッジループを検出
# 現在の選択を保存
edge_loop_start = time.time()
initial_selection = {v.index for v in bm.verts if v.select}
if initial_selection:
# 選択された頂点から構成されるエッジを取得
selected_edges = [e for e in bm.edges if all(v.select for v in e.verts)]
# 完全に含まれる閉じたエッジループを記録
complete_loops = set()
# 各エッジに対してループ選択を実行
edge_count = len(selected_edges)
print(f" 処理対象エッジ数: {edge_count}")
for edge_idx, edge in enumerate(selected_edges):
if edge_idx % 100 == 0 and edge_idx > 0:
print(f" エッジ処理進捗: {edge_idx}/{edge_count} ({edge_idx/edge_count*100:.1f}%)")
# 現在の選択をクリア
bpy.ops.mesh.select_all(action='DESELECT')
# エッジを選択
edge.select = True
bmesh.update_edit_mesh(target_obj.data)
# エッジループを選択
bpy.ops.mesh.loop_multi_select(ring=False)
# 選択されたループの頂点とエッジを取得
bm = bmesh.from_edit_mesh(target_obj.data)
loop_verts = {v.index for v in bm.verts if v.select}
# ループが閉じているか確認(各頂点が正確に2つの選択されたエッジに接続されている)
is_closed_loop = True
for v in bm.verts:
if v.select:
# 選択された頂点に接続する選択されたエッジの数をカウント
selected_edge_count = sum(1 for e in v.link_edges if e.select)
# 選択された頂点に接続するエッジの総数をカウント
total_edge_count = len(v.link_edges)
# ループに含まれる頂点は、ループ内の2つの頂点とループ外の2つの頂点、
# 合計4つの頂点とエッジでつながっている必要がある
if selected_edge_count != 2 or total_edge_count != 4:
is_closed_loop = False
break
# ループが閉じていて、完全に初期選択内に含まれるか確認
# if is_closed_loop and loop_verts.issubset(initial_selection):
if is_closed_loop:
# ループ内の頂点の元のウェイトパターンがコンポーネントのパターンと類似しているか確認
pattern_check_start = time.time()
is_similar_pattern = True
pattern_weights = data['original_pattern_weights']
for vert_idx in loop_verts:
if vert_idx in original_vertex_weights:
orig_weights = original_vertex_weights[vert_idx]
# ウェイトパターンの類似性をチェック
similarity_score = 0.0
total_weight = 0.0
# パターン内の各グループについて
for group_name, pattern_weight in pattern_weights.items():
orig_weight = orig_weights.get(group_name, 0.0)
diff = abs(pattern_weight - orig_weight)
similarity_score += diff
total_weight += pattern_weight
# 類似性スコアを正規化(0に近いほど類似)
if total_weight > 0:
normalized_score = similarity_score / total_weight
# 閾値を超える場合は類似していないと判断
if normalized_score > 0.05: # 閾値は調整可能
is_similar_pattern = False
break
if is_similar_pattern:
complete_loops.update(loop_verts)
# すべての選択をクリア
bpy.ops.mesh.select_all(action='DESELECT')
# 閉じたループのみを選択
bm = bmesh.from_edit_mesh(target_obj.data)
for vert in bm.verts:
if vert.index in complete_loops:
vert.select = True
bmesh.update_edit_mesh(target_obj.data)
print(f" エッジループ検出時間: {time.time() - edge_loop_start:.2f}秒")
# 選択範囲を拡大
select_more_start = time.time()
for _ in range(1):
bpy.ops.mesh.select_more()
# 選択された頂点のインデックスを取得
bm = bmesh.from_edit_mesh(target_obj.data)
selected_verts = [v.index for v in bm.verts if v.select]
print(f" 選択範囲拡大時間: {time.time() - select_more_start:.2f}秒")
if len(selected_verts) == 0:
print(f"警告: OBB {obb_idx} 内に頂点が見つかりませんでした。スキップします。")
continue
# オブジェクトモードに戻る
mode_switch_start = time.time()
bpy.ops.object.mode_set(mode='OBJECT')
print(f" モード切替時間: {time.time() - mode_switch_start:.2f}秒")
# 選択された頂点にConnected頂点グループのウェイトを設定
weight_assign_start = time.time()
for vert_idx in selected_verts:
if vert_idx not in data['component_vertices']: # コンポーネント内の頂点は除外
connected_group.add([vert_idx], 1.0, 'REPLACE')
print(f" ウェイト割り当て時間: {time.time() - weight_assign_start:.2f}秒")
# Connectedグループにスムージングを適用
smoothing_start = time.time()
bpy.ops.object.select_all(action='DESELECT')
target_obj.select_set(True)
bpy.context.view_layer.objects.active = target_obj
# Connectedグループを選択
for i, group in enumerate(target_obj.vertex_groups):
target_obj.vertex_groups.active_index = i
if group.name == f"Connected_{data['component_id']}":
break
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
# スムージングを適用
smooth_op_start = time.time()
bpy.ops.object.vertex_group_smooth(factor=0.5, repeat=3, expand=0.5)
print(f" 標準スムージング時間: {time.time() - smooth_op_start:.2f}秒")
custom_smooth_start = time.time()
#custom_max_vertex_group(target_obj, f"Connected_{data['component_id']}", vert_neighbors, repeat=1, weight_factor=1.0)
custom_max_vertex_group_numpy(target_obj, f"Connected_{data['component_id']}", neighbors_info, offsets, num_verts, repeat=3, weight_factor=1.0)
print(f" カスタムスムージング時間: {time.time() - custom_smooth_start:.2f}秒")
bpy.ops.object.mode_set(mode='OBJECT')
print(f" スムージング処理時間: {time.time() - smoothing_start:.2f}秒")
# スムージング後、original_patternと各頂点のoriginal_vertex_weightsの差に基づいてウェイトを減衰
decay_start = time.time()
connected_group = target_obj.vertex_groups[f"Connected_{data['component_id']}"]
original_pattern_weights = data['original_pattern_weights']
for vert_idx, vert in enumerate(target_obj.data.vertices):
if vert_idx in data['component_vertices']:
connected_group.add([vert_idx], 0.0, 'REPLACE')
continue
if vert_idx not in data['component_vertices'] and vert_idx in original_vertex_weights: # コンポーネント内の頂点は除外
# 元のウェイトパターンを取得
orig_weights = original_vertex_weights[vert_idx]
# パターンとの差異を計算
similarity_score = 0.0
total_weight = 0.0
orig_weight_dict = {}
# パターン内の各グループについて
for group_name, pattern_weight in original_pattern_weights.items():
orig_weight = orig_weights.get(group_name, 0.0)
diff = abs(pattern_weight - orig_weight)
similarity_score += diff
total_weight += pattern_weight
orig_weight_dict[group_name] = orig_weight
# 類似性スコアを正規化(0に近いほど類似)
if total_weight > 0:
normalized_score = similarity_score / total_weight
# 類似性に基づいて減衰係数を計算(類似性が低いほど減衰が強い)
decay_factor = 1.0 - min(normalized_score * 3.33333, 1.0) # 最大90%まで減衰
# Connectedグループのウェイトを取得
connected_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == connected_group.index:
connected_weight = g.weight
break
# 減衰したウェイトを適用
if normalized_score > 0.3:
connected_group.add([vert_idx], 0.0, 'REPLACE')
else:
connected_group.add([vert_idx], connected_weight * decay_factor, 'REPLACE')
else:
connected_group.add([vert_idx], 0.0, 'REPLACE')
print(f" ウェイト減衰時間: {time.time() - decay_start:.2f}秒")
print(f" OBB {obb_idx+1}/{len(obb_data)} 処理時間: {time.time() - obb_start:.2f}秒")
# 編集モードに戻る(次のループのため)
if obb_idx < len(obb_data) - 1:
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.object.mode_set(mode='OBJECT')
print(f"OBB処理時間: {time.time() - start_time:.2f}秒")
# ウェイト合成開始
start_time = time.time()
# 各パターンのウェイトをConnectedグループのウェイトに基づいて合成
# 複数のConnectedグループに属する頂点の場合は加重平均を計算
connected_groups = [vg for vg in target_obj.vertex_groups if vg.name.startswith("Connected_")]
if connected_groups:
# 各頂点に対して処理
for vert in target_obj.data.vertices:
# コンポーネント内の頂点はスキップ(既に処理済み)
skip = False
for (new_pattern, original_pattern), components in component_patterns.items():
for component in components:
if vert.index in component:
skip = True
break
if skip:
break
if skip:
continue
# 各Connectedグループのウェイトとパターンを収集
connected_weights = {}
total_weight = 0.0
for connected_group in connected_groups:
weight = 0.0
for g in vert.groups:
if g.group == connected_group.index:
weight = g.weight
break
if weight > 0:
# グループ名からコンポーネントIDを抽出
component_id = int(connected_group.name.split('_')[1])
# 対応するパターンを見つける
for i, data in enumerate(obb_data):
if data['component_id'] == component_id:
# パターンウェイトからタプル形式に変換
pattern_tuple = tuple(sorted((k, v) for k, v in data['pattern_weights'].items() if v > 0.0001))
connected_weights[pattern_tuple] = weight
total_weight += weight
break
# ウェイトが0の場合はスキップ
if total_weight <= 0:
continue
# 各グループのウェイトを合成
combined_weights = {}
for pattern, weight in connected_weights.items():
# パターンの正規化
normalized_weight = weight / total_weight
# パターンからグループ名とウェイト値を抽出
for group_name, value in pattern:
if group_name not in combined_weights:
combined_weights[group_name] = 0.0
combined_weights[group_name] += value * normalized_weight
factor = total_weight
if total_weight > 1.0:
factor = 1.0
# 既存のウェイト値を保存
existing_weights = {}
for group_name in existing_target_groups:
if group_name in target_obj.vertex_groups:
group = target_obj.vertex_groups[group_name]
weight = 0.0
for g in vert.groups:
if g.group == group.index:
weight = g.weight
break
existing_weights[group_name] = weight
new_weights = {}
# 既存のウェイト値を更新(factor に基づいて減衰)
for group_name, weight in existing_weights.items():
if group_name in target_obj.vertex_groups and group_name in existing_target_groups:
group = target_obj.vertex_groups[group_name]
# 既存のウェイトを (1-factor) 倍に減衰
new_weights[group_name] = weight * (1.0 - factor)
# 各パターンのウェイトを加算
for pattern, weight in connected_weights.items():
# パターンの正規化
normalized_weight = weight / total_weight
if total_weight < 1.0:
normalized_weight = weight
# パターンからグループ名とウェイト値を抽出
for group_name, value in pattern:
if group_name in target_obj.vertex_groups and group_name in existing_target_groups:
group = target_obj.vertex_groups[group_name]
# 新しいウェイト値を計算
compornent_weight = value * normalized_weight
# ウェイトを更新
new_weights[group_name] = new_weights[group_name] + compornent_weight
for group_name, weight in new_weights.items():
if weight > 1.0:
weight = 1.0
group = target_obj.vertex_groups[group_name]
group.add([vert.index], weight, 'REPLACE')
print(f"ウェイト合成時間: {time.time() - start_time:.2f}秒")
print(f"総処理時間: {time.time() - start_total:.2f}秒")
def create_vertex_neighbors_array(obj, expand_distance=0.05, sigma=0.02):
"""
各頂点の近接頂点情報を NumPy 配列形式で作成する
Parameters:
obj: 対象のメッシュオブジェクト
expand_distance: 検索範囲(メートル単位)
sigma: ガウス関数の標準偏差
Returns:
neighbors_info (np.ndarray): shape = (M, 2) のフラット配列
各行は [neighbor_idx, weight_factor]
offsets (np.ndarray): shape = (num_verts+1,)
頂点 i の近接データは neighbors_info[offsets[i]:offsets[i+1]] に格納
num_verts (int): 頂点数
"""
# 評価済みメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
num_verts = len(eval_mesh.vertices)
# 頂点のワールド座標を取得
world_coords = [eval_obj.matrix_world @ v.co for v in eval_mesh.vertices]
# KDTreeを構築
kdtree = cKDTree(world_coords)
# ガウス関数
def gaussian(distance, sigma):
return math.exp(-(distance**2) / (2 * sigma**2))
# 近傍頂点リストを作成
neighbors_list = [[] for _ in range(num_verts)]
for vert_idx, vert_world in enumerate(world_coords):
# 範囲内の頂点を検索
for idx in kdtree.query_ball_point(vert_world, expand_distance):
if idx != vert_idx:
dist = (world_coords[idx] - vert_world).length
weight_factor = gaussian(dist, sigma)
neighbors_list[vert_idx].append((idx, weight_factor))
# フラットな配列とオフセット配列を作成
# offsets[i] は i 番目頂点の近接配列が始まるインデックスを表す
offsets = np.zeros(num_verts+1, dtype=np.int64)
for i in range(num_verts):
offsets[i+1] = offsets[i] + len(neighbors_list[i])
flat_data = []
for i in range(num_verts):
flat_data.extend(neighbors_list[i])
# (neighbor_idx, weight_factor) -> NumPy 配列化
neighbors_info = np.array(flat_data, dtype=np.float64) # shape = (M, 2)
# ただし neighbor_idx は整数なので、後で int にキャストして使う
return neighbors_info, offsets, num_verts
def custom_max_vertex_group_numpy(obj, group_name, neighbors_info, offsets, num_verts,
repeat=3, weight_factor=1.0):
"""
NumPy を用いたカスタムスムージング (MAXベース) の高速実装
Parameters:
obj: 対象のメッシュオブジェクト
group_name: スムージング対象の頂点グループ名
neighbors_info: create_vertex_neighbors_array で作成した近接頂点情報フラット配列
offsets: create_vertex_neighbors_array で作成した頂点ごとのオフセット
num_verts: 頂点数
repeat: スムージングの繰り返し回数
weight_factor: 周辺頂点からの最大値に掛ける係数
"""
if group_name not in obj.vertex_groups:
print(f"頂点グループ '{group_name}' が見つかりません")
return
group_index = obj.vertex_groups[group_name].index
# 頂点ウェイトを NumPy 配列で取得
current_weights = np.zeros(num_verts, dtype=np.float64)
for v in obj.data.vertices:
w = 0.0
for g in v.groups:
if g.group == group_index:
w = g.weight
break
current_weights[v.index] = w
# スムージングを繰り返し
for _ in range(repeat):
new_weights = np.copy(current_weights)
# 各頂点ごとに近接頂点の (weight * factor) の最大値を取る
for vert_idx in range(num_verts):
start = offsets[vert_idx]
end = offsets[vert_idx+1]
if start == end:
# 近接頂点がない場合
continue
# neighbors_info[start:end, 0] -> neighbor_idx (float なので int にキャスト)
neighbor_idx = neighbors_info[start:end, 0].astype(np.int64)
dist_factors = neighbors_info[start:end, 1] # weight_dist_factor
# 周囲頂点のウェイトに距離係数を掛け合わせ、その最大値を求める
local_max = np.max(current_weights[neighbor_idx] * dist_factors)
# 現在ウェイトと比較して大きい方を適用
new_weights[vert_idx] = max(new_weights[vert_idx], local_max * weight_factor)
current_weights = new_weights
# 計算結果を頂点グループに反映 (まとめて書き戻し)
vg = obj.vertex_groups[group_name]
for vert_idx in range(num_verts):
w = current_weights[vert_idx]
if w > 1.0:
w = 1.0
vg.add([vert_idx], float(w), 'REPLACE')
def create_vertex_neighbors_list(obj, expand_distance=0.05, sigma=0.02):
"""
各頂点の近接頂点リストを作成する
Parameters:
obj: 対象のメッシュオブジェクト
expand_distance: 検索範囲(メートル単位)
sigma: ガウス関数の標準偏差
Returns:
list: 各頂点の近接頂点リスト
vert_neighbors[vert_idx] = [(neighbor_idx, weight_factor), ... ]
weight_factorはガウス関数で計算された距離に基づく重み係数
"""
# 評価済みメッシュを取得
depsgraph = bpy.context.evaluated_depsgraph_get()
eval_obj = obj.evaluated_get(depsgraph)
eval_mesh = eval_obj.data
# 頂点のワールド座標を取得
world_coords = [eval_obj.matrix_world @ v.co for v in eval_mesh.vertices]
# KDTreeを構築
kdtree = cKDTree(world_coords)
# ガウス関数
def gaussian(distance, sigma):
return math.exp(-(distance**2) / (2 * sigma**2))
# 近傍頂点リストを作成
vert_neighbors = [[] for _ in range(len(world_coords))]
for vert_idx, vert_world in enumerate(world_coords):
# 範囲内の頂点を検索
for idx in kdtree.query_ball_point(vert_world, expand_distance):
if idx != vert_idx:
dist = (world_coords[idx] - vert_world).length
weight_factor = gaussian(dist, sigma)
vert_neighbors[vert_idx].append((idx, weight_factor))
return vert_neighbors
def custom_max_vertex_group(obj, group_name, vert_neighbors, repeat=3, weight_factor=1.0):
"""
ガウス関数を用いたカスタムスムージング
Parameters:
obj: 対象のメッシュオブジェクト
group_name: スムージングする頂点グループ名
repeat: 繰り返し回数
weight_factor: ウェイトの拡張係数
"""
if group_name not in obj.vertex_groups:
print(f"頂点グループ {group_name} が見つかりません")
return
# 頂点グループのインデックスを取得
group_index = obj.vertex_groups[group_name].index
# 現在のウェイト値を取得
current_weights = {}
for vert_idx, vert in enumerate(obj.data.vertices):
weight = 0.0
for g in vert.groups:
if g.group == group_index:
weight = g.weight
break
current_weights[vert_idx] = weight
# 指定回数繰り返す
for _ in range(repeat):
new_weights = current_weights.copy()
# 各頂点に対して処理
for vert_idx, vert in enumerate(obj.data.vertices):
# 指定距離内の頂点を取得
nearby_verts = vert_neighbors[vert_idx]
if not nearby_verts:
continue
weight_max = 0.0
for idx, weight_dist_factor in nearby_verts:
weight_max = max(weight_max, current_weights[idx] * weight_dist_factor)
# 現在の頂点の重みも考慮
current_vert_weight = current_weights[vert_idx]
# 新しいウェイト値を計算(現在の値と周囲の加重平均の線形補間)
new_weights[vert_idx] = max(current_vert_weight, weight_max * weight_factor)
# 新しいウェイト値を現在のウェイト値として設定
current_weights = new_weights
# 最終的なウェイト値を頂点グループに適用
group = obj.vertex_groups[group_name]
for vert_idx, weight in current_weights.items():
if weight > 1.0:
weight = 1.0
group.add([vert_idx], weight, 'REPLACE')
# ウェイトパターンの類似性を計算する関数
def calculate_weight_pattern_similarity(weights1, weights2):
"""
2つのウェイトパターン間の類似性を計算する
Parameters:
weights1: 1つ目のウェイトパターン {group_name: weight}
weights2: 2つ目のウェイトパターン {group_name: weight}
Returns:
float: 類似度(0.0〜1.0、1.0が完全一致)
"""
# 両方のパターンに存在するグループを取得
all_groups = set(weights1.keys()) | set(weights2.keys())
if not all_groups:
return 0.0
# 各グループのウェイト差の合計を計算
total_diff = 0.0
for group in all_groups:
w1 = weights1.get(group, 0.0)
w2 = weights2.get(group, 0.0)
total_diff += abs(w1 - w2)
# 正規化(グループ数で割る)
normalized_diff = total_diff / len(all_groups)
# 類似度に変換(差が小さいほど類似度が高い)
similarity = 1.0 - min(normalized_diff, 1.0)
return similarity
def calculate_component_size(coords):
"""
コンポーネントのサイズを計算する
Parameters:
coords: 頂点座標のリスト
Returns:
float: コンポーネントのサイズ(直径または最大の辺の長さ)
"""
if len(coords) < 2:
return 0.0
# バウンディングボックスを計算
min_x = min(co.x for co in coords)
max_x = max(co.x for co in coords)
min_y = min(co.y for co in coords)
max_y = max(co.y for co in coords)
min_z = min(co.z for co in coords)
max_z = max(co.z for co in coords)
# バウンディングボックスの対角線の長さを計算
diagonal = ((max_x - min_x)**2 + (max_y - min_y)**2 + (max_z - min_z)**2)**0.5
return diagonal
def cluster_components_by_adaptive_distance(component_coords, component_sizes):
"""
コンポーネント間の距離に基づいてクラスタリングする(サイズに応じた適応的な閾値を使用)
Parameters:
component_coords: コンポーネントインデックスをキー、頂点座標のリストを値とする辞書
component_sizes: コンポーネントインデックスをキー、サイズを値とする辞書
Returns:
list: クラスターのリスト(各クラスターはコンポーネントインデックスのリスト)
"""
if not component_coords:
return []
# 各コンポーネントの中心点を計算
centers = {}
for comp_idx, coords in component_coords.items():
if coords:
center = Vector((0, 0, 0))
for co in coords:
center += co
center /= len(coords)
centers[comp_idx] = center
# クラスターのリスト(初期状態では各コンポーネントが独立したクラスター)
clusters = [[comp_idx] for comp_idx in centers.keys()]
# コンポーネントの平均サイズを計算
if component_sizes:
average_size = sum(component_sizes.values()) / len(component_sizes)
else:
average_size = 0.1 # デフォルト値
# 最小閾値と最大閾値を設定
min_threshold = 0.1
max_threshold = 1.0
# クラスターをマージする
merged = True
while merged:
merged = False
# 各クラスターペアをチェック
for i in range(len(clusters)):
if i >= len(clusters): # クラスター数が変わった場合の安全チェック
break
for j in range(i + 1, len(clusters)):
if j >= len(clusters): # クラスター数が変わった場合の安全チェック
break
# 各クラスター内のコンポーネント間の最小距離と関連するサイズを計算
min_distance = float('inf')
comp_i_size = 0.0
comp_j_size = 0.0
for comp_i in clusters[i]:
for comp_j in clusters[j]:
if comp_i in centers and comp_j in centers:
dist = (centers[comp_i] - centers[comp_j]).length
if dist < min_distance:
min_distance = dist
comp_i_size = component_sizes.get(comp_i, average_size)
comp_j_size = component_sizes.get(comp_j, average_size)
# 2つのコンポーネントのサイズに基づいて適応的な閾値を計算
# より大きいコンポーネントのサイズの一定割合を使用
adaptive_threshold = max(comp_i_size, comp_j_size) * 0.5
# 閾値の範囲を制限
adaptive_threshold = max(min_threshold, min(max_threshold, adaptive_threshold))
# 距離が閾値以下ならクラスターをマージ
if min_distance <= adaptive_threshold:
clusters[i].extend(clusters[j])
clusters.pop(j)
merged = True
break
if merged:
break
return clusters
# def normalize_overlapping_vertices_weights(clothing_meshes, base_avatar_data, distance_threshold=0.0001):
# """
# ワールド座標上でほぼ重なっている頂点のウェイトをそろえる
# 1段階細分化したメッシュで処理を行い、結果を元のメッシュに適用する
# Parameters:
# clothing_meshes: 処理対象の衣装メッシュのリスト
# base_avatar_data: ベースアバターデータ
# distance_threshold: 重なっていると判定する距離の閾値
# """
# print("Normalizing weights for overlapping vertices using subdivision approach...")
# # チェック対象の頂点グループを取得
# target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# # 処理対象のメッシュをフィルタリング(InpaintMaskを持つメッシュのみ)
# valid_meshes = []
# for mesh in clothing_meshes:
# if "InpaintMask" in mesh.vertex_groups:
# valid_meshes.append(mesh)
# if not valid_meshes:
# print("No meshes with InpaintMask found, skipping normalization")
# return
# # 各メッシュに対して処理
# for mesh_obj in valid_meshes:
# # 元のメッシュを複製して細分化
# bpy.ops.object.select_all(action='DESELECT')
# mesh_obj.select_set(True)
# bpy.context.view_layer.objects.active = mesh_obj
# bpy.ops.object.duplicate(linked=False)
# subdiv_obj = bpy.context.active_object
# subdiv_obj.name = f"{mesh_obj.name}_TempSubdiv"
# # 細分化モディファイアを追加
# subdiv_mod = subdiv_obj.modifiers.new(name="TempSubdivision", type='SUBSURF')
# subdiv_mod.levels = 1
# subdiv_mod.render_levels = 1
# subdiv_mod.subdivision_type = 'SIMPLE' # Simpleモードを使用
# # モディファイアを適用
# apply_modifiers_keep_shapekeys_with_temp(subdiv_obj)
# bpy.context.view_layer.objects.active = mesh_obj
# # 頂点グループをコピー
# for group_name in target_groups:
# if group_name in mesh_obj.vertex_groups and group_name not in subdiv_obj.vertex_groups:
# subdiv_obj.vertex_groups.new(name=group_name)
# if "InpaintMask" in mesh_obj.vertex_groups and "InpaintMask" not in subdiv_obj.vertex_groups:
# subdiv_obj.vertex_groups.new(name="InpaintMask")
# if "Rigid" in mesh_obj.vertex_groups and "Rigid" not in subdiv_obj.vertex_groups:
# subdiv_obj.vertex_groups.new(name="Rigid")
# # 評価済みデータを取得
# depsgraph = bpy.context.evaluated_depsgraph_get()
# eval_subdiv_obj = subdiv_obj.evaluated_get(depsgraph)
# eval_subdiv_mesh = eval_subdiv_obj.data
# # BMeshを作成して頂点のエッジ情報を取得
# bm = bmesh.new()
# bm.from_mesh(eval_subdiv_mesh)
# bm.verts.ensure_lookup_table()
# bm.edges.ensure_lookup_table()
# # 頂点データを収集
# all_vertices = []
# for vert_idx, vert in enumerate(eval_subdiv_mesh.vertices):
# # Rigid頂点グループのウェイトをチェック
# rigid_weight = 0.0
# if "Rigid" in subdiv_obj.vertex_groups:
# rigid_group = subdiv_obj.vertex_groups["Rigid"]
# for g in vert.groups:
# if g.group == rigid_group.index:
# rigid_weight = g.weight
# break
# # Rigid頂点グループのウェイトが0より大きい頂点は無視
# if rigid_weight > 0:
# continue
# # InpaintMaskのウェイトを取得
# inpaint_weight = 0.0
# if "InpaintMask" in subdiv_obj.vertex_groups:
# inpaint_group = subdiv_obj.vertex_groups["InpaintMask"]
# for g in vert.groups:
# if g.group == inpaint_group.index:
# inpaint_weight = g.weight
# break
# # 頂点のワールド座標を計算
# world_pos = subdiv_obj.matrix_world @ vert.co
# # 対象グループのウェイトを収集
# weights = {}
# for group_name in target_groups:
# if group_name in subdiv_obj.vertex_groups:
# group = subdiv_obj.vertex_groups[group_name]
# for g in vert.groups:
# if g.group == group.index:
# weights[group_name] = g.weight
# break
# # 頂点に接続するエッジの方向ベクトルを収集
# edge_directions = []
# bm_vert = bm.verts[vert_idx]
# for edge in bm_vert.link_edges:
# other_vert = edge.other_vert(bm_vert)
# direction = (other_vert.co - bm_vert.co).normalized()
# edge_directions.append(direction)
# # 頂点データを保存
# all_vertices.append({
# 'vert_idx': vert_idx,
# 'world_pos': world_pos,
# 'weights': weights,
# 'inpaint_weight': inpaint_weight,
# 'edge_directions': edge_directions
# })
# # KDTreeを構築して近接頂点を効率的に検索
# positions = [v['world_pos'] for v in all_vertices]
# kdtree = KDTree(len(positions))
# for i, pos in enumerate(positions):
# kdtree.insert(pos, i)
# kdtree.balance()
# # 重なっている頂点を検出してウェイトを揃える
# processed = set() # 処理済みの頂点インデックスを記録
# normalized_weights = {} # 正規化されたウェイト {vert_idx: {group_name: weight}}
# for i, vert_data in enumerate(all_vertices):
# if i in processed:
# continue
# # 近接頂点を検索
# overlapping_indices = []
# for (co, idx, dist) in kdtree.find_range(vert_data['world_pos'], distance_threshold):
# if idx != i and idx not in processed: # 自分自身と処理済みの頂点は除外
# # エッジ方向の類似性をチェック
# if check_edge_direction_similarity(vert_data['edge_directions'], all_vertices[idx]['edge_directions']):
# overlapping_indices.append(idx)
# if not overlapping_indices:
# continue
# # 重なっている頂点グループを含める
# overlapping_indices.append(i)
# processed.add(i)
# # 重なっている頂点をInpaintMaskのウェイトでソート
# overlapping_verts = [all_vertices[idx] for idx in overlapping_indices]
# overlapping_verts.sort(key=lambda x: x['inpaint_weight'])
# # InpaintMaskのウェイトが最小の頂点のウェイトを使用
# reference_vert = overlapping_verts[0]
# reference_weights = reference_vert['weights']
# min_inpaint_weight = reference_vert['inpaint_weight']
# same_weight_verts = [v for v in overlapping_verts if abs(v['inpaint_weight'] - min_inpaint_weight) < 0.0001]
# if len(same_weight_verts) > 1:
# # 平均ウェイトを計算
# avg_weights = {}
# for group_name in target_groups:
# weights_sum = 0.0
# count = 0
# for v in same_weight_verts:
# if group_name in v['weights']:
# weights_sum += v['weights'][group_name]
# count += 1
# if count > 0:
# avg_weights[group_name] = weights_sum / count
# reference_weights = avg_weights
# # すべての重なっている頂点に参照ウェイトを適用
# for vert in overlapping_verts:
# vert_idx = vert['vert_idx']
# normalized_weights[vert_idx] = reference_weights.copy()
# processed.add(overlapping_indices[overlapping_verts.index(vert)])
# # BMeshを解放
# bm.free()
# # 細分化メッシュの頂点ウェイトを更新
# for vert_idx, weights in normalized_weights.items():
# # 既存のウェイトをクリア
# for group_name in target_groups:
# if group_name in subdiv_obj.vertex_groups:
# try:
# subdiv_obj.vertex_groups[group_name].remove([vert_idx])
# except RuntimeError:
# pass
# # 新しいウェイトを適用
# for group_name, weight in weights.items():
# if weight > 0:
# if group_name not in subdiv_obj.vertex_groups:
# subdiv_obj.vertex_groups.new(name=group_name)
# subdiv_obj.vertex_groups[group_name].add([vert_idx], weight, 'REPLACE')
# # 細分化メッシュから元のメッシュに結果を転送
# # KDTreeを使用して元のメッシュの各頂点に最も近い細分化メッシュの頂点を見つける
# original_verts_world = [mesh_obj.matrix_world @ v.co for v in mesh_obj.data.vertices]
# subdiv_verts_world = [subdiv_obj.matrix_world @ v.co for v in subdiv_obj.data.vertices]
# subdiv_kdtree = KDTree(len(subdiv_verts_world))
# for i, pos in enumerate(subdiv_verts_world):
# subdiv_kdtree.insert(pos, i)
# subdiv_kdtree.balance()
# # 元のメッシュの各頂点に対して最も近い細分化メッシュの頂点を見つけてウェイトを転送
# for i, orig_pos in enumerate(original_verts_world):
# # Rigid頂点グループのウェイトをチェック
# rigid_weight = 0.0
# if "Rigid" in mesh_obj.vertex_groups:
# rigid_group = mesh_obj.vertex_groups["Rigid"]
# rigid_group_index = rigid_group.index
# for g in mesh_obj.data.vertices[i].groups:
# if g.group == rigid_group_index:
# rigid_weight = g.weight
# break
# # Rigid頂点グループのウェイトが0より大きい頂点は無視
# if rigid_weight > 0:
# continue
# # 最も近い細分化メッシュの頂点を見つける
# co, subdiv_idx, dist = subdiv_kdtree.find(orig_pos)
# # 距離が閾値以内の場合のみウェイトを転送
# if dist <= distance_threshold * 2: # 少し大きめの閾値を使用
# # 細分化メッシュの頂点がnormalized_weightsに含まれているか確認
# if subdiv_idx in normalized_weights:
# # 既存のウェイトをクリア
# for group_name in target_groups:
# if group_name in mesh_obj.vertex_groups:
# group = mesh_obj.vertex_groups[group_name]
# group_index = group.index
# # 頂点がこのグループに存在するか確認
# vertex_in_group = False
# for g in mesh_obj.data.vertices[i].groups:
# if g.group == group_index:
# vertex_in_group = True
# break
# # 頂点がグループに存在する場合のみ削除を試みる
# if vertex_in_group:
# try:
# group.remove([i])
# except RuntimeError:
# pass
# # 新しいウェイトを適用
# for group_name, weight in normalized_weights[subdiv_idx].items():
# if weight > 0:
# if group_name not in mesh_obj.vertex_groups:
# mesh_obj.vertex_groups.new(name=group_name)
# mesh_obj.vertex_groups[group_name].add([i], weight, 'REPLACE')
# # 一時的な細分化メッシュを削除
# bpy.data.objects.remove(subdiv_obj, do_unlink=True)
# print(f"Normalized weights using subdivision approach")
def subdivide_selected_vertices(obj_name, vertex_indices, level=2):
"""
特定のメッシュの選択された頂点を細分化する
引数:
obj_name (str): 操作対象のオブジェクト名
vertex_indices (list): 選択する頂点のインデックスリスト
cuts (int): 細分化の分割数
"""
# アクティブオブジェクトの設定
bpy.ops.object.mode_set(mode='OBJECT')
obj = bpy.data.objects.get(obj_name)
if obj is None:
print(f"オブジェクト '{obj_name}' が見つかりません")
return
# オブジェクトを選択してアクティブに
bpy.ops.object.select_all(action='DESELECT')
obj.select_set(True)
bpy.context.view_layer.objects.active = obj
# 編集モードに切り替え
bpy.ops.object.mode_set(mode='EDIT')
# bmeshを取得
me = obj.data
bm = bmesh.from_edit_mesh(me)
bm.verts.ensure_lookup_table()
bm.edges.ensure_lookup_table()
bm.faces.ensure_lookup_table()
# すべての選択を解除
for v in bm.verts:
v.select = False
for e in bm.edges:
e.select = False
for f in bm.faces:
f.select = False
# 指定された頂点を選択
for idx in vertex_indices:
if idx < len(bm.verts):
bm.verts[idx].select = True
# 選択された頂点「同士」で構成されるエッジのみを選択
# つまり、エッジの両端の頂点が両方とも選択された頂点リストに含まれる場合のみ選択
selected_verts = set(bm.verts[idx] for idx in vertex_indices if idx < len(bm.verts))
connected_edges = []
for e in bm.edges:
# エッジの両端の頂点が両方とも選択された頂点セットに含まれる場合のみ選択
if e.verts[0] in selected_verts and e.verts[1] in selected_verts:
e.select = True
connected_edges.append(e)
# 変更を適用
bmesh.update_edit_mesh(me)
# 細分化操作
if connected_edges:
for _ in range(level):
bpy.ops.mesh.subdivide(number_cuts=1)
print(f"{len(connected_edges)} 個のエッジが細分化されました")
else:
print("選択された頂点間にエッジが見つかりませんでした")
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
obj.data.update()
def normalize_overlapping_vertices_weights(clothing_meshes, base_avatar_data, overlap_attr_name="Overlapped", world_pos_attr_name="OriginalWorldPosition"):
"""
Overlapped属性が1となる頂点で構成される面およびエッジのみを対象に
重なっている頂点のウェイトを正規化する
Parameters:
clothing_meshes: 処理対象の衣装メッシュのリスト
base_avatar_data: ベースアバターデータ
overlap_attr_name: 重なり検出フラグの属性名
world_pos_attr_name: ワールド座標が保存された属性名
"""
print("Normalizing weights for overlapping vertices using custom attributes...")
original_active = bpy.context.view_layer.objects.active
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# 処理対象のメッシュをフィルタリング(必要な属性を持つメッシュのみ)
valid_meshes = []
for mesh in clothing_meshes:
if (overlap_attr_name in mesh.data.attributes and
world_pos_attr_name in mesh.data.attributes and
"InpaintMask" in mesh.vertex_groups):
valid_meshes.append(mesh)
if not valid_meshes:
print(f"警告: {overlap_attr_name}と{world_pos_attr_name}属性を持つメッシュが見つかりません。処理をスキップします。")
return
# 各メッシュに対して処理
for mesh_obj in valid_meshes:
# オブジェクトを選択して編集モードに入る
bpy.ops.object.select_all(action='DESELECT')
mesh_obj.select_set(True)
bpy.context.view_layer.objects.active = mesh_obj
# 元のメッシュを複製して処理用オブジェクトを作成
bpy.ops.object.duplicate(linked=False)
work_obj = bpy.context.active_object
work_obj.name = f"{mesh_obj.name}_OverlapWork"
# カスタム属性を取得
overlap_attr = mesh_obj.data.attributes[overlap_attr_name]
world_pos_attr = mesh_obj.data.attributes[world_pos_attr_name]
# 重なっている頂点(属性値が1.0)を特定
overlapping_verts_ids = [i for i, data in enumerate(overlap_attr.data) if data.value > 0.9999]
if not overlapping_verts_ids:
print(f"警告: {mesh_obj.name}に重なっている頂点が見つかりません。処理をスキップします。")
bpy.data.objects.remove(work_obj, do_unlink=True)
continue
subdivide_selected_vertices(work_obj.name, overlapping_verts_ids, level=2)
subdiv_overlap_attr = work_obj.data.attributes[overlap_attr_name]
subdiv_overlapping_verts_ids = [i for i, data in enumerate(subdiv_overlap_attr.data) if data.value > 0.9999]
subdiv_world_pos_attr = work_obj.data.attributes[world_pos_attr_name]
# KDTreeを構築して近接頂点を効率的に検索
subdiv_original_world_positions = []
for vert_idx in subdiv_overlapping_verts_ids:
world_pos = Vector(subdiv_world_pos_attr.data[vert_idx].vector)
subdiv_original_world_positions.append(world_pos)
# 重なっている頂点をグループ化(同じ位置の頂点をまとめる)
distance_threshold = 0.0001 # 重なりの閾値
overlapping_groups = {}
for orig_idx, world_pos in zip(subdiv_overlapping_verts_ids, subdiv_original_world_positions):
found_group = False
for group_id, (group_pos, members) in overlapping_groups.items():
if (world_pos - group_pos).length <= distance_threshold:
members.append(orig_idx)
found_group = True
break
if not found_group:
group_id = len(overlapping_groups)
overlapping_groups[group_id] = (world_pos, [orig_idx])
# 各グループで重なっている頂点の基準ウェイトを計算
reference_weights = {}
vert_weights = {}
for group_id, (group_pos, member_indices) in overlapping_groups.items():
# InpaintMaskのウェイトを取得
member_inpaint_weights = []
for idx in member_indices:
inpaint_weight = 0.0
if "InpaintMask" in work_obj.vertex_groups:
inpaint_group = work_obj.vertex_groups["InpaintMask"]
for g in work_obj.data.vertices[idx].groups:
if g.group == inpaint_group.index:
inpaint_weight = g.weight
break
member_inpaint_weights.append((idx, inpaint_weight))
# InpaintMaskのウェイトでソート
member_inpaint_weights.sort(key=lambda x: x[1])
# InpaintMaskのウェイトが最小の頂点を基準にする
if member_inpaint_weights:
reference_idx = member_inpaint_weights[0][0]
ref_weights = {}
# 頂点グループのウェイトを取得
for group_name in target_groups:
if group_name in work_obj.vertex_groups:
group = work_obj.vertex_groups[group_name]
weight = 0.0
for g in work_obj.data.vertices[reference_idx].groups:
if g.group == group.index:
weight = g.weight
break
ref_weights[group_name] = weight
reference_weights[group_id] = ref_weights
# 同じInpaintMaskウェイトの頂点がある場合は平均値を計算
min_inpaint_weight = member_inpaint_weights[0][1]
same_weight_vert_ids = [v[0] for v in member_inpaint_weights if abs(v[1] - min_inpaint_weight) < 0.0001]
same_weight_verts = [work_obj.data.vertices[idx] for idx in same_weight_vert_ids]
if len(same_weight_verts) > 1:
# 平均ウェイトを計算
avg_weights = {}
for group_name in target_groups:
if group_name in work_obj.vertex_groups:
weights_sum = 0.0
count = 0
for v in same_weight_verts:
weight = 0.0
for g in v.groups:
if g.group == mesh_obj.vertex_groups[group_name].index:
weight = g.weight
break
if weight > 0:
weights_sum += weight
count += 1
if count > 0:
avg_weights[group_name] = weights_sum / count
reference_weights[group_id] = avg_weights
# すべての重なっている頂点に参照ウェイトを適用
for vert_idx in member_indices:
vert_weights[vert_idx] = reference_weights[group_id].copy()
# 元のメッシュに戻る
bpy.ops.object.select_all(action='DESELECT')
mesh_obj.select_set(True)
bpy.context.view_layer.objects.active = mesh_obj
# 頂点グループを更新
updated_count = 0
# 細分化メッシュの頂点と元のメッシュの頂点をマッピング
for orig_idx in overlapping_verts_ids:
# 元の頂点のワールド座標を取得
orig_world_pos = Vector(world_pos_attr.data[orig_idx].vector)
# 最も近い細分化メッシュの頂点を見つける
closest_idx = None
min_dist = float('inf')
for subdiv_idx, subdiv_pos in zip(subdiv_overlapping_verts_ids, subdiv_original_world_positions):
dist = (orig_world_pos - subdiv_pos).length
if dist < min_dist:
min_dist = dist
closest_idx = subdiv_idx
# 一定距離以内の場合、ウェイトを適用
if closest_idx is not None and closest_idx in vert_weights and min_dist < distance_threshold:
# 頂点グループのウェイトを更新
for group_name, weight in vert_weights[closest_idx].items():
if group_name in mesh_obj.vertex_groups:
mesh_obj.vertex_groups[group_name].add([orig_idx], weight, 'REPLACE')
updated_count += 1
# 作業用オブジェクトを削除
bpy.data.objects.remove(work_obj, do_unlink=True)
print(f"{mesh_obj.name}の{updated_count}個の頂点のウェイトを正規化しました。")
bpy.context.view_layer.objects.active = original_active
print("重なっている頂点のウェイト正規化が完了しました。")
def normalize_weights_from_overlapping_uvmap(clothing_meshes, base_avatar_data, uvmap_name="OverlappingVertices"):
"""
create_overlapping_vertices_uvmapで保存したUVマップから重なっている頂点を取り出し、
重なっている頂点の頂点ウェイトを正規化する
Parameters:
clothing_meshes: 処理対象の衣装メッシュのリスト
base_avatar_data: ベースアバターデータ
uvmap_name: 重なっている頂点情報を格納したUVマップの名前
"""
print(f"Normalizing weights from overlapping vertices UV map: {uvmap_name}")
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# 処理対象のメッシュをフィルタリング(指定されたUVマップを持つメッシュのみ)
valid_meshes = []
for mesh in clothing_meshes:
if uvmap_name in mesh.data.uv_layers and "InpaintMask" in mesh.vertex_groups:
valid_meshes.append(mesh)
if not valid_meshes:
print(f"警告: {uvmap_name}とInpaintMaskを持つメッシュが見つかりません。処理をスキップします。")
return
# 各メッシュに対して処理
for mesh_obj in valid_meshes:
all_vertices = {}
for vert_idx, vert in enumerate(mesh_obj.data.vertices):
# Rigid頂点グループのウェイトをチェック
rigid_weight = 0.0
if "Rigid" in mesh_obj.vertex_groups:
rigid_group = mesh_obj.vertex_groups["Rigid"]
for g in vert.groups:
if g.group == rigid_group.index:
rigid_weight = g.weight
break
# InpaintMaskのウェイトを取得
inpaint_weight = 0.0
if "InpaintMask" in mesh_obj.vertex_groups:
inpaint_group = mesh_obj.vertex_groups["InpaintMask"]
for g in vert.groups:
if g.group == inpaint_group.index:
inpaint_weight = g.weight
break
# 頂点の座標
pos = vert.co
# 対象グループのウェイトを収集
weights = {}
for group_name in target_groups:
if group_name in mesh_obj.vertex_groups:
group = mesh_obj.vertex_groups[group_name]
for g in vert.groups:
if g.group == group.index:
weights[group_name] = g.weight
break
# 頂点データを保存
all_vertices[vert_idx] = {
'pos': pos,
'weights': weights,
'rigid_weight': rigid_weight,
'inpaint_weight': inpaint_weight
}
# UVマップからオーバーラップ情報を取得
uv_layer = mesh_obj.data.uv_layers[uvmap_name]
# UVマップの値をグループ化して重なっている頂点を特定
overlapping_groups = {}
# UVマップの値を収集
for poly in mesh_obj.data.polygons:
for loop_idx in poly.loop_indices:
vert_idx = mesh_obj.data.loops[loop_idx].vertex_index
uv = uv_layer.data[loop_idx].uv
# UV座標が原点の場合はスキップ
if abs(uv.x) < 0.0001 and abs(uv.y) < 0.0001:
continue
# UVの値を丸めてハッシュ可能なキーにする
uv_key = (round(uv.x, 5), round(uv.y, 5))
if uv_key not in overlapping_groups:
overlapping_groups[uv_key] = []
if vert_idx not in overlapping_groups[uv_key]:
overlapping_groups[uv_key].append(vert_idx)
# 重なっている頂点グループのみを抽出(2つ以上の頂点を含むグループ)
valid_groups = {k: v for k, v in overlapping_groups.items() if len(v) >= 2}
if not valid_groups:
print(f"警告: {mesh_obj.name}に重なっている頂点が見つかりません。処理をスキップします。")
continue
print(f"{mesh_obj.name}で{len(valid_groups)}個の重なり頂点グループを処理します。")
# 各重なりグループに対して処理
for uv_key, vert_indices in valid_groups.items():
# 重なっている頂点をInpaintMaskのウェイトでソート
overlapping_verts = [all_vertices[idx] for idx in vert_indices]
overlapping_verts.sort(key=lambda x: x['inpaint_weight'])
# InpaintMaskのウェイトが最小の頂点のウェイトを使用
reference_vert = overlapping_verts[0]
reference_weights = reference_vert['weights']
# 同じInpaintMaskウェイトの頂点がある場合は平均値を計算
min_inpaint_weight = reference_vert['inpaint_weight']
same_weight_verts = [v for v in overlapping_verts if abs(v['inpaint_weight'] - min_inpaint_weight) < 0.0001]
if len(same_weight_verts) > 1:
# 平均ウェイトを計算
avg_weights = {}
for group_name in target_groups:
weights_sum = 0.0
count = 0
for v in same_weight_verts:
if group_name in v['weights']:
weights_sum += v['weights'][group_name]
count += 1
if count > 0:
avg_weights[group_name] = weights_sum / count
reference_weights = avg_weights
# ウェイトを各頂点に適用
for vert_idx in vert_indices:
for group_name, new_weight in reference_weights.items():
if new_weight > 0:
mesh_obj.vertex_groups[group_name].add([vert_idx], new_weight, 'REPLACE')
else:
mesh_obj.vertex_groups[group_name].add([vert_idx], 0.0, 'REPLACE')
print(f"重なっている頂点のウェイト正規化が完了しました。")
def check_edge_direction_similarity(directions1, directions2, angle_threshold=3.0):
"""
2つの頂点のエッジ方向セットが類似しているかをチェックする
Parameters:
directions1: 1つ目の頂点のエッジ方向ベクトルのリスト
directions2: 2つ目の頂点のエッジ方向ベクトルのリスト
angle_threshold: 類似と判断する角度の閾値(度)
Returns:
bool: 少なくとも1つのエッジ方向が類似している場合はTrue
"""
# 孤立頂点(エッジがない)の場合はFalseを返す
if not directions1 or not directions2:
return False
# 角度の閾値をラジアンに変換
angle_threshold_rad = math.radians(angle_threshold)
# 各方向の組み合わせをチェック
for dir1 in directions1:
for dir2 in directions2:
# 2つの方向ベクトル間の角度を計算
dot_product = dir1.dot(dir2)
# 内積が1を超えることがあるため、クランプする
dot_product = max(min(dot_product, 1.0), -1.0)
angle = math.acos(dot_product)
# 角度が閾値以下、または180度から閾値を引いた値以上(逆方向も考慮)
if angle <= angle_threshold_rad or angle >= (math.pi - angle_threshold_rad):
return True
return False
def process_humanoid_vertex_groups(mesh_obj: bpy.types.Object, clothing_armature: bpy.types.Object, base_avatar_data: dict, clothing_avatar_data: dict) -> None:
"""
衣装メッシュのHumanoidボーン頂点グループを処理
- Humanoidボーン名を素体アバターデータのものに変換
- 補助ボーンの頂点グループを追加
- 条件を満たす場合はOptional Humanoidボーンの頂点グループを追加
"""
# Get bone names from clothing armature
clothing_bone_names = set(bone.name for bone in clothing_armature.data.bones)
# Humanoidボーン名のマッピングを作成
base_humanoid_to_bone = {bone_map["humanoidBoneName"]: bone_map["boneName"]
for bone_map in base_avatar_data["humanoidBones"]}
clothing_humanoid_to_bone = {bone_map["humanoidBoneName"]: bone_map["boneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
clothing_bone_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
# 補助ボーンのマッピングを作成
auxiliary_bones = {}
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
if humanoid_bone in base_humanoid_to_bone:
auxiliary_bones[base_humanoid_to_bone[humanoid_bone]] = aux_set["auxiliaryBones"]
# 既存の頂点グループ名を取得
existing_groups = set(vg.name for vg in mesh_obj.vertex_groups)
# 名前変更が必要なグループを特定
groups_to_rename = {}
for group in mesh_obj.vertex_groups:
if group.name in clothing_bone_to_humanoid:
humanoid_name = clothing_bone_to_humanoid[group.name]
if humanoid_name in base_humanoid_to_bone:
base_bone_name = base_humanoid_to_bone[humanoid_name]
groups_to_rename[group.name] = base_bone_name
# グループ名を変更
for old_name, new_name in groups_to_rename.items():
if old_name in mesh_obj.vertex_groups:
group = mesh_obj.vertex_groups[old_name]
group_index = group.index
# 頂点ごとのウェイトを保存
weights = {}
for vert in mesh_obj.data.vertices:
for g in vert.groups:
if g.group == group_index:
weights[vert.index] = g.weight
break
# グループ名を変更
group.name = new_name
# 補助ボーンの頂点グループを追加
if new_name in auxiliary_bones:
# 補助ボーンの頂点グループを作成
for aux_bone in auxiliary_bones[new_name]:
if aux_bone not in existing_groups:
mesh_obj.vertex_groups.new(name=aux_bone)
existing_groups = set(vg.name for vg in mesh_obj.vertex_groups)
breast_bones_dont_exist = 'LeftBreast' not in clothing_humanoid_to_bone and 'RightBreast' not in clothing_humanoid_to_bone
# Process each humanoid bone from base avatar
for humanoid_name, bone_name in base_humanoid_to_bone.items():
# Skip if bone exists in clothing armature
if bone_name in existing_groups:
continue
should_add_optional_humanoid_bone = False
# Condition 1: Chest exists in clothing, UpperChest missing in clothing but exists in base
if (humanoid_name == "UpperChest" and
"Chest" in clothing_humanoid_to_bone and
base_humanoid_to_bone["Chest"] in existing_groups and
"UpperChest" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 2: LeftLowerLeg exists in clothing, LeftFoot missing in clothing but exists in base
elif (humanoid_name == "LeftFoot" and
"LeftLowerLeg" in clothing_humanoid_to_bone and
base_humanoid_to_bone["LeftLowerLeg"] in existing_groups and
"LeftFoot" not in clothing_humanoid_to_bone and
"LeftFoot" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 2: RightLowerLeg exists in clothing, RightFoot missing in clothing but exists in base
elif (humanoid_name == "RightFoot" and
"RightLowerLeg" in clothing_humanoid_to_bone and
base_humanoid_to_bone["RightLowerLeg"] in existing_groups and
"RightFoot" not in clothing_humanoid_to_bone and
"RightFoot" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 3: LeftLowerLeg or LeftFoot exists in clothing, LeftToe missing in clothing but exists in base
elif (humanoid_name == "LeftToe" and
(("LeftLowerLeg" in clothing_humanoid_to_bone and base_humanoid_to_bone["LeftLowerLeg"] in existing_groups) or
("LeftFoot" in clothing_humanoid_to_bone and base_humanoid_to_bone["LeftFoot"] in existing_groups)) and
"LeftToe" not in clothing_humanoid_to_bone and
"LeftToe" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 3: RightLowerLeg or RightFoot exists in clothing, RightToe missing in clothing but exists in base
elif (humanoid_name == "RightToe" and
(("RightLowerLeg" in clothing_humanoid_to_bone and base_humanoid_to_bone["RightLowerLeg"] in existing_groups) or
("RightFoot" in clothing_humanoid_to_bone and base_humanoid_to_bone["RightFoot"] in existing_groups)) and
"RightToe" not in clothing_humanoid_to_bone and
"RightToe" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 4: LeftShoulder exists in clothing, LeftUpperArm exists in base but not in clothing
elif (humanoid_name == "LeftUpperArm" and
"LeftShoulder" in clothing_humanoid_to_bone and
base_humanoid_to_bone["LeftShoulder"] in existing_groups and
"LeftUpperArm" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 4: RightShoulder exists in clothing, RightUpperArm exists in base but not in clothing
elif (humanoid_name == "RightUpperArm" and
"RightShoulder" in clothing_humanoid_to_bone and
base_humanoid_to_bone["RightShoulder"] in existing_groups and
"RightUpperArm" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 5: LeftBreast exists in clothing, breast bones don't exist in clothing, Chest or UpperChest exists in base
elif (humanoid_name == "LeftBreast" and breast_bones_dont_exist and
(base_humanoid_to_bone["Chest"] in existing_groups or base_humanoid_to_bone["UpperChest"] in existing_groups) and
"LeftBreast" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
# Condition 5: RightBreast exists in clothing, breast bones don't exist in clothing, Chest or UpperChest exists in base
elif (humanoid_name == "RightBreast" and breast_bones_dont_exist and
(base_humanoid_to_bone["Chest"] in existing_groups or base_humanoid_to_bone["UpperChest"] in existing_groups) and
"RightBreast" in base_humanoid_to_bone):
should_add_optional_humanoid_bone = True
if should_add_optional_humanoid_bone:
print(f"Adding optional humanoid bone group: {humanoid_name} ({bone_name})")
if bone_name not in existing_groups:
mesh_obj.vertex_groups.new(name=bone_name)
else:
print(f"Optional humanoid bone group already exists: {bone_name}")
# 補助ボーンの頂点グループを追加
if bone_name in auxiliary_bones:
# 補助ボーンの頂点グループを作成
for aux_bone in auxiliary_bones[bone_name]:
if aux_bone not in existing_groups:
mesh_obj.vertex_groups.new(name=aux_bone)
def store_armature_modifier_settings(obj):
"""Armatureモディファイアの設定を保存"""
armature_settings = []
for modifier in obj.modifiers:
if modifier.type == 'ARMATURE':
settings = {
'name': modifier.name,
'object': modifier.object,
'vertex_group': modifier.vertex_group,
'invert_vertex_group': modifier.invert_vertex_group,
'use_vertex_groups': modifier.use_vertex_groups,
'use_bone_envelopes': modifier.use_bone_envelopes,
'use_deform_preserve_volume': modifier.use_deform_preserve_volume,
'use_multi_modifier': modifier.use_multi_modifier,
'show_viewport': modifier.show_viewport,
'show_render': modifier.show_render,
}
armature_settings.append(settings)
return armature_settings
def restore_armature_modifier(obj, settings):
"""Armatureモディファイアを復元"""
for modifier_settings in settings:
modifier = obj.modifiers.new(name=modifier_settings['name'], type='ARMATURE')
modifier.object = modifier_settings['object']
modifier.vertex_group = modifier_settings['vertex_group']
modifier.invert_vertex_group = modifier_settings['invert_vertex_group']
modifier.use_vertex_groups = modifier_settings['use_vertex_groups']
modifier.use_bone_envelopes = modifier_settings['use_bone_envelopes']
modifier.use_deform_preserve_volume = modifier_settings['use_deform_preserve_volume']
modifier.use_multi_modifier = modifier_settings['use_multi_modifier']
modifier.show_viewport = modifier_settings['show_viewport']
modifier.show_render = modifier_settings['show_render']
def set_armature_modifier_visibility(obj, show_viewport, show_render):
"""Armatureモディファイアの表示を設定"""
for modifier in obj.modifiers:
if modifier.type == 'ARMATURE':
modifier.show_viewport = show_viewport
modifier.show_render = show_render
def set_armature_modifier_target_armature(obj, target_armature):
"""Armatureモディファイアの表示を設定"""
for modifier in obj.modifiers:
if modifier.type == 'ARMATURE':
modifier.object = target_armature
def apply_all_shapekeys(obj):
"""オブジェクトの全シェイプキーを適用する"""
if not obj.data.shape_keys:
return
# 基底シェイプキーは常にインデックス0
if obj.active_shape_key_index == 0 and len(obj.data.shape_keys.key_blocks) > 1:
obj.active_shape_key_index = 1
else:
obj.active_shape_key_index = 0
bpy.context.view_layer.objects.active = obj
bpy.ops.object.shape_key_remove(all=True, apply_mix=True)
def apply_modifiers(obj):
"""モディファイアを適用"""
bpy.context.view_layer.objects.active = obj
for modifier in obj.modifiers[:]: # スライスを使用してリストのコピーを作成
try:
bpy.ops.object.modifier_apply(modifier=modifier.name)
except Exception as e:
print(f"Failed to apply modifier {modifier.name}: {e}")
def apply_modifiers_keep_shapekeys_with_temp(obj):
"""一時オブジェクトを使用してシェイプキーを維持しながらモディファイアを適用する"""
if obj.type != 'MESH':
return
if not obj.data.shape_keys:
# シェイプキーがない場合は通常のモディファイア適用
bpy.context.view_layer.objects.active = obj
for modifier in obj.modifiers:
try:
bpy.ops.object.modifier_apply(modifier=modifier.name)
except Exception as e:
print(f"Failed to apply modifier {modifier.name}: {e}")
return
# グローバルカウンタの初期化(存在しない場合)
if not hasattr(apply_modifiers_keep_shapekeys_with_temp, 'counter'):
apply_modifiers_keep_shapekeys_with_temp.counter = 0
shape_keys = obj.data.shape_keys.key_blocks
temp_objects = []
# 各シェイプキーに対して一時オブジェクトを作成
for i, shape_key in enumerate(shape_keys):
if i == 0: # Basis は飛ばす
continue
# すべてのオブジェクトの選択を解除
bpy.ops.object.select_all(action='DESELECT')
# オブジェクトを複製
bpy.context.view_layer.objects.active = obj
obj.select_set(True)
bpy.ops.object.duplicate(linked=False)
temp_obj = bpy.context.active_object
temp_obj.name = f"t{apply_modifiers_keep_shapekeys_with_temp.counter}:{shape_key.name}"
apply_modifiers_keep_shapekeys_with_temp.counter += 1
temp_objects.append(temp_obj)
# 他のシェイプキーの値を0に、対象のシェイプキーの値を1に設定
for sk in temp_obj.data.shape_keys.key_blocks:
if sk.name == shape_key.name:
sk.value = 1.0
else:
sk.value = 0.0
# シェイプキーを適用
apply_all_shapekeys(temp_obj)
# Armature以外のモディファイアを適用
apply_modifiers(temp_obj)
# 元のオブジェクトの処理
bpy.context.view_layer.objects.active = obj
# まず全てのシェイプキーの値を0に設定
for sk in obj.data.shape_keys.key_blocks:
sk.value = 0.0
# シェイプキーを適用
apply_all_shapekeys(obj)
# モディファイアを適用
apply_modifiers(obj)
# 一時オブジェクトの形状を元のオブジェクトのシェイプキーとして追加
obj.shape_key_add(name="Basis")
for temp_obj in temp_objects:
# シェイプキーを追加
shape_key = obj.shape_key_add(name=temp_obj.name.split(':')[-1])
shape_key.interpolation = 'KEY_LINEAR'
if shape_key.name == "SymmetricDeformed":
shape_key.value = 1.0
# 頂点座標を設定
for i, vert in enumerate(temp_obj.data.vertices):
shape_key.data[i].co = vert.co.copy()
# 一時オブジェクトを削除
bpy.data.objects.remove(temp_obj, do_unlink=True)
def get_evaluated_mesh(obj):
"""モディファイア適用後のメッシュを取得"""
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_obj = obj.evaluated_get(depsgraph)
evaluated_mesh = evaluated_obj.data
# BMeshを作成して評価済みメッシュの情報を取得
bm = bmesh.new()
bm.from_mesh(evaluated_mesh)
bm.transform(obj.matrix_world)
return bm
def create_side_weight_groups(mesh_obj: bpy.types.Object, base_avatar_data: dict, clothing_armature: bpy.types.Object, clothing_avatar_data: dict) -> None:
"""
右半身と左半身のウェイト合計の頂点グループを作成
"""
# 左右のボーンを分類
left_bones, right_bones = set(), set()
center_bones = set()
# 左右で別のグループにする脚・足・足指・胸のボーン
leg_foot_chest_bones = {
"LeftUpperLeg", "RightUpperLeg", "LeftLowerLeg", "RightLowerLeg",
"LeftFoot", "RightFoot", "LeftToes", "RightToes", "LeftBreast", "RightBreast",
"LeftFootThumbProximal", "LeftFootThumbIntermediate", "LeftFootThumbDistal",
"LeftFootIndexProximal", "LeftFootIndexIntermediate", "LeftFootIndexDistal",
"LeftFootMiddleProximal", "LeftFootMiddleIntermediate", "LeftFootMiddleDistal",
"LeftFootRingProximal", "LeftFootRingIntermediate", "LeftFootRingDistal",
"LeftFootLittleProximal", "LeftFootLittleIntermediate", "LeftFootLittleDistal",
"RightFootThumbProximal", "RightFootThumbIntermediate", "RightFootThumbDistal",
"RightFootIndexProximal", "RightFootIndexIntermediate", "RightFootIndexDistal",
"RightFootMiddleProximal", "RightFootMiddleIntermediate", "RightFootMiddleDistal",
"RightFootRingProximal", "RightFootRingIntermediate", "RightFootRingDistal",
"RightFootLittleProximal", "RightFootLittleIntermediate", "RightFootLittleDistal"
}
# 右側グループに入れる指ボーン
right_group_fingers = {
"LeftThumbProximal", "LeftThumbIntermediate", "LeftThumbDistal",
"LeftMiddleProximal", "LeftMiddleIntermediate", "LeftMiddleDistal",
"LeftLittleProximal", "LeftLittleIntermediate", "LeftLittleDistal",
"RightThumbProximal", "RightThumbIntermediate", "RightThumbDistal",
"RightMiddleProximal", "RightMiddleIntermediate", "RightMiddleDistal",
"RightLittleProximal", "RightLittleIntermediate", "RightLittleDistal"
}
# 左側グループに入れる指ボーン
left_group_fingers = {
"LeftIndexProximal", "LeftIndexIntermediate", "LeftIndexDistal",
"LeftRingProximal", "LeftRingIntermediate", "LeftRingDistal",
"RightIndexProximal", "RightIndexIntermediate", "RightIndexDistal",
"RightRingProximal", "RightRingIntermediate", "RightRingDistal"
}
# 分離しない肩・腕・手のボーン(center_bones扱い)
excluded_bones = {
"LeftShoulder", "RightShoulder", "LeftUpperArm", "RightUpperArm",
"LeftLowerArm", "RightLowerArm", "LeftHand", "RightHand"
}
ignored_bones = {"Head"}
for bone_map in base_avatar_data.get("humanoidBones", []):
bone_name = bone_map["boneName"]
humanoid_name = bone_map["humanoidBoneName"]
if bone_name in ignored_bones:
continue
if humanoid_name in excluded_bones:
# 分離しない(center_bones扱い)
center_bones.add(bone_name)
elif humanoid_name in leg_foot_chest_bones:
# 脚・足・足指・胸は従来通り左右で分ける
if any(k in humanoid_name for k in ["Left", "left"]):
left_bones.add(bone_name)
elif any(k in humanoid_name for k in ["Right", "right"]):
right_bones.add(bone_name)
elif humanoid_name in right_group_fingers:
# 右側グループに入れる指ボーン
right_bones.add(bone_name)
elif humanoid_name in left_group_fingers:
# 左側グループに入れる指ボーン
left_bones.add(bone_name)
else:
center_bones.add(bone_name)
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_name = aux_set["humanoidBoneName"]
for aux_bone in aux_set["auxiliaryBones"]:
if humanoid_name in ignored_bones:
continue
if humanoid_name in excluded_bones:
# 分離しない(center_bones扱い)
center_bones.add(aux_bone)
elif humanoid_name in leg_foot_chest_bones:
# 脚・足・足指・胸は従来通り左右で分ける
if is_left_side_bone(aux_bone, humanoid_name):
left_bones.add(aux_bone)
elif is_right_side_bone(aux_bone, humanoid_name):
right_bones.add(aux_bone)
elif humanoid_name in right_group_fingers:
# 右側グループに入れる指ボーン
right_bones.add(aux_bone)
elif humanoid_name in left_group_fingers:
# 左側グループに入れる指ボーン
left_bones.add(aux_bone)
else:
center_bones.add(aux_bone)
clothing_bone_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
print(f"clothing_bone_to_humanoid: {clothing_bone_to_humanoid}")
for clothing_bone in clothing_armature.data.bones:
current_bone = clothing_bone
current_bone_name = current_bone.name
parent_humanoid_name = None
while current_bone:
if current_bone.name in clothing_bone_to_humanoid.keys():
parent_humanoid_name = clothing_bone_to_humanoid[current_bone.name]
break
current_bone = current_bone.parent
print(f"current_bone_name: {current_bone_name}, parent_humanoid_name: {parent_humanoid_name}")
if parent_humanoid_name:
if parent_humanoid_name in ignored_bones:
continue
if parent_humanoid_name in excluded_bones:
# 分離しない(center_bones扱い)
center_bones.add(current_bone_name)
elif parent_humanoid_name in leg_foot_chest_bones:
# 脚・足・足指・胸は従来通り左右で分ける
if is_left_side_bone(current_bone_name, parent_humanoid_name):
left_bones.add(current_bone_name)
elif is_right_side_bone(current_bone_name, parent_humanoid_name):
right_bones.add(current_bone_name)
elif parent_humanoid_name in right_group_fingers:
# 右側グループに入れる指ボーン
right_bones.add(current_bone_name)
elif parent_humanoid_name in left_group_fingers:
# 左側グループに入れる指ボーン
left_bones.add(current_bone_name)
else:
center_bones.add(current_bone_name)
# 既存の頂点グループを取得
vertex_groups = {vg.name: vg.index for vg in mesh_obj.vertex_groups}
# 新しい頂点グループを作成または既存のものをクリア
for side in ["RightSideWeights", "LeftSideWeights", "BothSideWeights"]:
if side in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.remove(mesh_obj.vertex_groups[side])
right_group = mesh_obj.vertex_groups.new(name="RightSideWeights")
left_group = mesh_obj.vertex_groups.new(name="LeftSideWeights")
both_group = mesh_obj.vertex_groups.new(name="BothSideWeights")
# 各頂点のウェイトを計算
for vert in mesh_obj.data.vertices:
right_weight = 0.0
left_weight = 0.0
for g in vert.groups:
group_name = mesh_obj.vertex_groups[g.group].name
weight = g.weight
if group_name in right_bones:
right_weight += weight
elif group_name in left_bones:
left_weight += weight
elif group_name in center_bones:
# 中央のボーンは両方に加算
right_weight += weight
left_weight += weight
# 新しい頂点グループにウェイトを設定
if right_weight > 0:
right_group.add([vert.index], right_weight, 'REPLACE')
if left_weight > 0:
left_group.add([vert.index], left_weight, 'REPLACE')
both_group.add([vert.index], 1.0, 'REPLACE')
def create_distance_falloff_transfer_mask(obj: bpy.types.Object,
base_avatar_data: dict,
group_name: str = "DistanceFalloffMask",
max_distance: float = 0.025,
min_distance: float = 0.002) -> bpy.types.VertexGroup:
"""
距離に基づいて減衰するTransferMask頂点グループを作成
Parameters:
obj: 対象のメッシュオブジェクト
base_avatar_data: ベースアバターのデータ
group_name: 生成する頂点グループの名前(デフォルト: "DistanceFalloffMask")
max_distance: ウェイトが0になる最大距離(デフォルト: 0.025)
min_distance: ウェイトが1になる最小距離(デフォルト: 0.002)
Returns:
bpy.types.VertexGroup: 生成された頂点グループ
"""
# 入力チェック
if obj.type != 'MESH':
print(f"Error: {obj.name} is not a mesh object")
return None
# ソースオブジェクト(Body.BaseAvatar)の取得
source_obj = bpy.data.objects.get("Body.BaseAvatar")
if not source_obj:
print("Error: Body.BaseAvatar not found")
return None
# モディファイア適用後のターゲットメッシュを取得
target_bm = get_evaluated_mesh(source_obj)
target_bm.faces.ensure_lookup_table()
# ターゲットメッシュのBVHツリーを作成
bvh = BVHTree.FromBMesh(target_bm)
# モディファイア適用後のソースメッシュを取得
source_bm = get_evaluated_mesh(obj)
source_bm.verts.ensure_lookup_table()
# 新しい頂点グループを作成
transfer_mask = obj.vertex_groups.new(name=group_name)
# 各頂点を処理
for vert_idx, vert in enumerate(obj.data.vertices):
# モディファイア適用後の頂点位置を使用
evaluated_vertex_co = source_bm.verts[vert_idx].co
# 最近接点と法線を取得
location, normal, index, distance = bvh.find_nearest(evaluated_vertex_co)
if location is not None:
# 距離に基づいてベースウェイトを計算
if distance > max_distance:
weight = 0.0
else:
d = distance - min_distance
if d < 0.0:
d = 0.0
weight = 1.0 - d / (max_distance - min_distance)
# 頂点グループに追加
transfer_mask.add([vert_idx], weight, 'REPLACE')
# BMeshをクリーンアップ
source_bm.free()
target_bm.free()
return transfer_mask
def get_humanoid_and_auxiliary_bone_groups(base_avatar_data):
"""HumanoidボーンとAuxiliaryボーンの頂点グループを取得"""
bone_groups = set()
# Humanoidボーンを追加
for bone_map in base_avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
bone_groups.add(bone_map["boneName"])
# Auxiliaryボーンを追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
for aux_bone in aux_set.get("auxiliaryBones", []):
bone_groups.add(aux_bone)
return bone_groups
def get_humanoid_and_auxiliary_bone_groups_with_intermediate(base_armature: bpy.types.Object, base_avatar_data: dict) -> set:
bone_groups = set()
# まず基本のHumanoidボーンとAuxiliaryボーンを追加
humanoid_bones = set()
humanoid_name_to_bone = {} # humanoidBoneName -> boneName のマッピング
for bone_map in base_avatar_data.get("humanoidBones", []):
if "boneName" in bone_map:
bone_name = bone_map["boneName"]
bone_groups.add(bone_name)
humanoid_bones.add(bone_name)
if "humanoidBoneName" in bone_map:
humanoid_name_to_bone[bone_map["humanoidBoneName"]] = bone_name
# Hipsボーンの実際のボーン名を取得
hips_bone_name = humanoid_name_to_bone.get("Hips")
# Auxiliaryボーンとその所属Humanoidボーンのマッピングを作成
auxiliary_to_humanoid = {}
humanoid_to_auxiliaries = {}
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_bone_name = aux_set.get("humanoidBoneName")
auxiliaries = aux_set.get("auxiliaryBones", [])
# Humanoidボーン名から実際のボーン名を取得
actual_humanoid_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == humanoid_bone_name:
actual_humanoid_bone = bone_map.get("boneName")
break
if actual_humanoid_bone:
humanoid_to_auxiliaries[actual_humanoid_bone] = set(auxiliaries)
for aux_bone in auxiliaries:
bone_groups.add(aux_bone)
auxiliary_to_humanoid[aux_bone] = actual_humanoid_bone
# 中間ボーンを検出・追加
if base_armature and base_armature.pose:
# Humanoidボーンの親辿り処理
for bone in base_armature.pose.bones:
if bone.name in humanoid_bones:
# Hipsボーンの場合は特別処理:ルートまでのすべての親ボーンを追加
if bone.name == hips_bone_name:
current_parent = bone.parent
while current_parent:
bone_groups.add(current_parent.name)
current_parent = current_parent.parent
else:
# 通常のHumanoidボーンの処理
# このHumanoidボーンの親を辿る
current_parent = bone.parent
intermediate_bones = []
while current_parent:
if current_parent.name in humanoid_bones:
# 親のHumanoidボーンに到達したら、中間ボーンをすべて追加
bone_groups.update(intermediate_bones)
break
else:
# 中間ボーンとして記録
intermediate_bones.append(current_parent.name)
current_parent = current_parent.parent
# Auxiliaryボーンの親辿り処理
for aux_bone_name in auxiliary_to_humanoid.keys():
if aux_bone_name in base_armature.pose.bones:
bone = base_armature.pose.bones[aux_bone_name]
parent_humanoid_bone = auxiliary_to_humanoid[aux_bone_name]
same_group_bones = {parent_humanoid_bone} | humanoid_to_auxiliaries.get(parent_humanoid_bone, set())
# このAuxiliaryボーンの親を辿る
current_parent = bone.parent
intermediate_bones = []
while current_parent:
if current_parent.name in same_group_bones:
# 同じグループのボーンに到達したら、中間ボーンをすべて追加
bone_groups.update(intermediate_bones)
break
else:
# 中間ボーンとして記録
intermediate_bones.append(current_parent.name)
current_parent = current_parent.parent
return bone_groups
def normalize_connected_components_weights(obj, base_avatar_data):
"""
メッシュの連結成分ごとにウェイトを正規化する
連結成分内のHumanoidボーンと補助ボーンのすべてのグループのウェイトが一様な場合のみ処理を行う
Parameters:
obj: 処理対象のメッシュオブジェクト
base_avatar_data: ベースアバターデータ
"""
# BMeshを作成
bm = bmesh.new()
bm.from_mesh(obj.data)
bm.verts.ensure_lookup_table()
bm.edges.ensure_lookup_table()
bm.faces.ensure_lookup_table()
# 連結成分を見つける
def find_connected_component(start_vert, visited):
"""深さ優先探索で連結成分を見つける"""
component = {start_vert.index}
stack = [start_vert]
while stack:
current = stack.pop()
for edge in current.link_edges:
other = edge.other_vert(current)
if other.index not in visited:
visited.add(other.index)
component.add(other.index)
stack.append(other)
return component
# すべての連結成分を取得
visited = set()
components = []
for vert in bm.verts:
if vert.index not in visited:
visited.add(vert.index)
component = find_connected_component(vert, visited)
components.append(component)
# チェック対象の頂点グループを取得
target_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# メッシュ内に存在する対象グループのみを抽出
existing_target_groups = {vg.name for vg in obj.vertex_groups if vg.name in target_groups}
# 各連結成分に対して処理
for component in components:
# コンポーネント内の各頂点のウェイトパターンを収集
vertex_weights = []
for vert_idx in component:
vert = obj.data.vertices[vert_idx]
weights = {group: 0.0 for group in existing_target_groups}
for g in vert.groups:
group_name = obj.vertex_groups[g.group].name
if group_name in existing_target_groups:
weights[group_name] = g.weight
vertex_weights.append(weights)
# チェック対象のすべてのグループで同じウェイトパターンかチェック
is_uniform = True
first_weights = vertex_weights[0]
for weights in vertex_weights[1:]:
for group_name in existing_target_groups:
if abs(weights[group_name] - first_weights[group_name]) >= 0.0001:
is_uniform = False
break
if not is_uniform:
break
# すべての対象グループのウェイトが一様な場合、平均値を適用
if is_uniform:
# 平均ウェイトを計算(この場合はすべての頂点で同じなので最初の頂点のウェイトを使用)
avg_weights = first_weights
# すべての頂点に平均ウェイトを適用
for vert_idx in component:
for group_name, avg_weight in avg_weights.items():
group = obj.vertex_groups[group_name]
if avg_weight > 0:
group.add([vert_idx], avg_weight, 'REPLACE')
else:
try:
group.remove([vert_idx])
except RuntimeError:
pass
# BMeshを解放
bm.free()
def adjust_hand_weights(target_obj, armature, base_avatar_data):
def get_bone_name(humanoid_bone_name):
"""Humanoidボーン名から実際のボーン名を取得"""
for bone_data in base_avatar_data.get("humanoidBones", []):
if bone_data.get("humanoidBoneName") == humanoid_bone_name:
return bone_data.get("boneName")
return None
def get_finger_bones(side_prefix):
"""指のボーン名を取得(足の指は除外)"""
finger_bones = []
finger_types = ["Thumb", "Index", "Middle", "Ring", "Little"]
positions = ["Proximal", "Intermediate", "Distal"]
for finger in finger_types:
for pos in positions:
humanoid_name = f"{side_prefix}{finger}{pos}"
# "Foot"を含まないHumanoidボーン名のみを処理
if "Foot" not in humanoid_name:
bone_name = get_bone_name(humanoid_name)
if bone_name:
finger_bones.append(bone_name)
return finger_bones
def get_bone_head_world(bone_name):
"""ボーンのhead位置をワールド座標で取得"""
bone = armature.pose.bones[bone_name]
return armature.matrix_world @ bone.head
def get_lowerarm_and_auxiliary_bones(side_prefix):
"""LowerArmとその補助ボーンを取得"""
lower_arm_bones = []
# LowerArmボーンを追加
lower_arm_name = get_bone_name(f"{side_prefix}LowerArm")
if lower_arm_name:
lower_arm_bones.append(lower_arm_name)
# 補助ボーンを追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
if aux_set["humanoidBoneName"] == f"{side_prefix}LowerArm":
lower_arm_bones.extend(aux_set["auxiliaryBones"])
return lower_arm_bones
def find_closest_lower_arm_bone(hand_head_pos, lower_arm_bones):
"""手のボーンのHeadに最も近いLowerArmまたは補助ボーンを見つける"""
closest_bone = None
min_distance = float('inf')
for bone_name in lower_arm_bones:
if bone_name in armature.pose.bones:
bone_head = get_bone_head_world(bone_name)
distance = (Vector(bone_head) - hand_head_pos).length
if distance < min_distance:
min_distance = distance
closest_bone = bone_name
return closest_bone
def process_hand(is_right):
# 手の種類に応じてHumanoidボーン名を設定
side = "Right" if is_right else "Left"
hand_bone_name = get_bone_name(f"{side}Hand")
lower_arm_bone_name = get_bone_name(f"{side}LowerArm")
if not hand_bone_name or not lower_arm_bone_name:
return
# 手と指のボーン名を収集
vertex_groups = [hand_bone_name] + get_finger_bones(side)
# ボーンの位置をワールド座標で取得
hand_head = Vector(get_bone_head_world(hand_bone_name))
lower_arm_head = Vector(get_bone_head_world(lower_arm_bone_name))
# 先端方向ベクトルを計算
tip_direction = (hand_head - lower_arm_head).normalized()
# 最小角度を探す
min_angle = float('inf')
has_weight = False
# 各頂点について処理
for v in target_obj.data.vertices:
has_vertex_weight = False
for group_name in vertex_groups:
if group_name not in target_obj.vertex_groups:
continue
weight = 0
try:
for g in v.groups:
if g.group == target_obj.vertex_groups[group_name].index:
weight = g.weight
break
if weight > 0:
has_weight = True
has_vertex_weight = True
except RuntimeError:
continue
# この頂点が手または指のウェイトを持っている場合
if has_vertex_weight:
# 頂点のワールド座標を計算
vertex_world = target_obj.matrix_world @ Vector(v.co)
# 頂点からhandボーンへのベクトル
vertex_vector = (vertex_world - hand_head).normalized()
# 角度を計算 (0-180度の範囲に収める)
# dot productを使用して角度を計算
dot_product = vertex_vector.dot(tip_direction)
# -1.0から1.0の範囲にクランプ
dot_product = max(min(dot_product, 1.0), -1.0)
angle = np.degrees(np.arccos(dot_product))
min_angle = min(min_angle, angle)
if not has_weight:
return
# 70度以上の場合の処理
if min_angle >= 70:
print(f"- Minimum angle exceeds 70 degrees ({min_angle} degrees), transferring weights for {side} hand")
# LowerArmとその補助ボーンを取得
lower_arm_bones = get_lowerarm_and_auxiliary_bones(side)
# 手のボーンのHeadに最も近いLowerArmボーンを見つける
closest_bone = find_closest_lower_arm_bone(hand_head, lower_arm_bones)
if closest_bone:
print(f"- Transferring weights to {closest_bone}")
# 各頂点について処理
for v in target_obj.data.vertices:
total_weight = 0.0
# 手と指のボーンのウェイトを合計
for group_name in vertex_groups:
if group_name in target_obj.vertex_groups:
group = target_obj.vertex_groups[group_name]
try:
for g in v.groups:
if g.group == group.index:
total_weight += g.weight
break
except RuntimeError:
continue
# ウェイトを最も近いLowerArmボーンに転送
if total_weight > 0:
if closest_bone not in target_obj.vertex_groups:
target_obj.vertex_groups.new(name=closest_bone)
target_obj.vertex_groups[closest_bone].add([v.index], total_weight, 'ADD')
# 元のウェイトを削除
for group_name in vertex_groups:
if group_name in target_obj.vertex_groups:
try:
target_obj.vertex_groups[group_name].remove([v.index])
except RuntimeError:
continue
else:
print(f"Warning: No suitable LowerArm bone found for {side} hand")
else:
print(f"- Minimum angle is within acceptable range ({min_angle} degrees), keeping weights for {side} hand")
# 両手の処理を実行
process_hand(is_right=True)
process_hand(is_right=False)
def create_distance_normal_based_vertex_group(body_obj, cloth_obj, distance_threshold=0.1, min_distance_threshold=0.005, angle_threshold=30.0, new_group_name="InpaintMask", normal_radius=0.01, filter_mask=None):
"""
素体メッシュからの距離と法線角度に基づいて衣装メッシュに頂点グループを作成します
Parameters:
body_obj (obj): 素体メッシュのオブジェクト名
cloth_obj (obj): 衣装メッシュのオブジェクト名
distance_threshold (float): この距離以上離れている場合、ウェイトを1.0に設定
min_distance_threshold (float): この距離以下の場合、ウェイトを0.0に設定
angle_threshold (float): この角度以上の場合、ウェイトを1.0に設定(度単位)
new_group_name (str): 作成する頂点グループ名
normal_radius (float): 面の近傍検索を行う際に考慮する球体の半径
filter_mask (obj): フィルタリングに使用する頂点グループ
"""
start_time = time.time()
if not body_obj or not cloth_obj:
print("指定されたオブジェクトが見つかりません")
return
# 現在のモードを保存
current_mode = bpy.context.object.mode
# オブジェクトモードに切り替え
bpy.ops.object.mode_set(mode='OBJECT')
# 衣装オブジェクトを選択してアクティブに
bpy.ops.object.select_all(action='DESELECT')
cloth_obj.select_set(True)
bpy.context.view_layer.objects.active = cloth_obj
# BVHツリーを作成(高速な最近傍点検索のため)
# モディファイア適用後のターゲットメッシュを取得
body_bm_time_start = time.time()
body_bm = get_evaluated_mesh(body_obj)
body_bm.verts.ensure_lookup_table()
body_bm.faces.ensure_lookup_table()
body_bm.normal_update()
body_bm_time = time.time() - body_bm_time_start
print(f" Body BMesh作成: {body_bm_time:.2f}秒")
# ターゲットメッシュのBVHツリーを作成
bvh_time_start = time.time()
bvh_tree = BVHTree.FromBMesh(body_bm)
bvh_time = time.time() - bvh_time_start
print(f" BVHツリー作成: {bvh_time:.2f}秒")
# 頂点グループがまだ存在しない場合は作成
if new_group_name not in cloth_obj.vertex_groups:
cloth_obj.vertex_groups.new(name=new_group_name)
vertex_group = cloth_obj.vertex_groups[new_group_name]
# 角度のしきい値をラジアンに変換
angle_threshold_rad = math.radians(angle_threshold)
# モディファイア適用後のソースメッシュを取得
cloth_bm_time_start = time.time()
cloth_bm = get_evaluated_mesh(cloth_obj)
cloth_bm.verts.ensure_lookup_table()
cloth_bm.faces.ensure_lookup_table()
cloth_bm.normal_update()
cloth_bm_time = time.time() - cloth_bm_time_start
print(f" Cloth BMesh作成: {cloth_bm_time:.2f}秒")
# トランスフォームマトリックスをキャッシュ(繰り返しの計算を避けるため)
body_normal_matrix = body_obj.matrix_world.inverted().transposed()
cloth_normal_matrix = cloth_obj.matrix_world.inverted().transposed()
# 修正した法線を格納する辞書
adjusted_normals_time_start = time.time()
adjusted_normals = {}
# 衣装メッシュの各頂点の法線処理(逆転の必要があるかチェック)
for i, vertex in enumerate(cloth_bm.verts):
# ワールド座標系での頂点位置と法線
cloth_vert_world = vertex.co
original_normal_world = (Vector((vertex.normal[0], vertex.normal[1], vertex.normal[2], 0))).xyz.normalized()
# 素体メッシュ上の最近傍面を検索
nearest_result = bvh_tree.find_nearest(cloth_vert_world)
if nearest_result:
# BVHTree.find_nearest() は (co, normal, index, distance) を返す
nearest_point, nearest_normal, nearest_face_index, _ = nearest_result
# 最近傍面を取得
face = body_bm.faces[nearest_face_index]
face_normal = face.normal
# 面の法線をワールド座標系に変換
face_normal_world = (Vector((face_normal[0], face_normal[1], face_normal[2], 0))).xyz.normalized()
# 内積が負の場合、法線を反転
dot_product = original_normal_world.dot(face_normal_world)
if dot_product < 0:
adjusted_normal = -original_normal_world
else:
adjusted_normal = original_normal_world
# 調整済み法線を辞書に保存
adjusted_normals[i] = adjusted_normal
else:
# 最近傍点が見つからない場合は元の法線を使用
adjusted_normals[i] = original_normal_world
adjusted_normals_time = time.time() - adjusted_normals_time_start
print(f" 法線調整: {adjusted_normals_time:.2f}秒")
# 面の中心点と面積を事前計算してキャッシュ
face_cache_time_start = time.time()
face_centers = []
face_areas = {}
face_adjusted_normals = {}
face_indices = []
for face in cloth_bm.faces:
# 面の中心点を計算
center = Vector((0, 0, 0))
for v in face.verts:
center += v.co
center /= len(face.verts)
face_centers.append(center)
face_indices.append(face.index)
# 面積を計算
face_areas[face.index] = face.calc_area()
# 面の調整済み法線を計算
face_normal = Vector((0, 0, 0))
for v in face.verts:
face_normal += adjusted_normals[v.index]
face_adjusted_normals[face.index] = face_normal.normalized()
face_cache_time = time.time() - face_cache_time_start
print(f" 面キャッシュ作成: {face_cache_time:.2f}秒")
# 衣装メッシュの面に対してKDTreeを構築
kdtree_time_start = time.time()
# size = len(cloth_bm.faces)
# kd = mathutils.kdtree.KDTree(size)
# for face_index, center in face_centers.items():
# kd.insert(center, face_index)
# kd.balance()
kd = cKDTree(face_centers)
# 衣装メッシュの頂点に対してKDTreeを構築(新しい実装用)
vertex_positions = []
for vertex in cloth_bm.verts:
vertex_positions.append(vertex.co)
vertex_kd = cKDTree(vertex_positions)
kdtree_time = time.time() - kdtree_time_start
print(f" KDTree構築: {kdtree_time:.2f}秒")
# 各頂点から一定距離内に面の一部が存在する面を検索するための準備完了
normal_avg_time_start = time.time()
normal_avg_time = time.time() - normal_avg_time_start
print(f" 面の近傍検索準備完了: {normal_avg_time:.2f}秒")
# ----------------------------------
# 衣装メッシュの各頂点に対して処理
weight_calc_time_start = time.time()
for i, vertex in enumerate(cloth_bm.verts):
# ワールド座標系での頂点位置
cloth_vert_world = vertex.co
# 調整済みの法線を使用
cloth_normal_world = adjusted_normals[i]
# 素体メッシュ上の最近傍面を検索
nearest_result = bvh_tree.find_nearest(cloth_vert_world)
distance = float('inf') # 初期値として無限大を設定
if nearest_result:
# BVHTree.find_nearest() は (co, normal, index, distance) を返す
nearest_point, nearest_normal, nearest_face_index, _ = nearest_result
# 最近傍面を取得
face = body_bm.faces[nearest_face_index]
face_normal = face.normal
# 面上の最近接点を計算
closest_point_on_face = mathutils.geometry.closest_point_on_tri(
cloth_vert_world,
face.verts[0].co,
face.verts[1].co,
face.verts[2].co
)
# 面の法線をワールド座標系に変換
face_normal_world = (Vector((face_normal[0], face_normal[1], face_normal[2], 0))).xyz.normalized()
# 距離を計算
distance = (cloth_vert_world - closest_point_on_face).length
# 最近傍点と法線を設定
nearest_point = closest_point_on_face
nearest_normal = face_normal_world
else:
# 最近傍点が見つからない場合は初期値をNoneに設定
nearest_point = None
nearest_normal = None
# 頂点ウェイトの初期値
weight = 0.0
if nearest_point and distance >= min_distance_threshold:
# 距離に基づくウェイト
if distance >= distance_threshold:
weight = 1.0
# 法線角度に基づくウェイト(新しいロジック)
if weight < 1.0 and nearest_normal:
# 衣装メッシュの頂点から一定距離内の面をすべて取得
min_angle = float('inf')
# cloth_vert_worldからnormal_radiusの範囲内に少なくとも一つの頂点を含む面を検索
nearby_vertex_indices = vertex_kd.query_ball_point(cloth_vert_world, normal_radius)
nearby_faces = set()
# 近傍頂点を含む面を検索
for vertex_idx in nearby_vertex_indices:
vertex = cloth_bm.verts[vertex_idx]
for face in vertex.link_faces:
nearby_faces.add(face.index)
nearby_faces = list(nearby_faces)
if nearby_faces:
for face_index in nearby_faces:
# 面の法線を取得
face_normal = face_adjusted_normals[face_index]
# 素体メッシュの最近接面の法線との角度を計算
angle = math.acos(min(1.0, max(-1.0, face_normal.dot(nearest_normal))))
# 90度以上の場合は法線を反転して再計算
if angle > math.pi / 2:
inverted_normal = -nearest_normal
angle = math.acos(min(1.0, max(-1.0, face_normal.dot(inverted_normal))))
# 最小角度を更新
min_angle = min(min_angle, angle)
else:
# 近傍面が見つからない場合は元の頂点法線を使用
original_adjusted_normal = adjusted_normals[i]
min_angle = math.acos(min(1.0, max(-1.0, original_adjusted_normal.dot(nearest_normal))))
# 90度以上の場合は法線を反転して再計算
if min_angle > math.pi / 2:
inverted_normal = -nearest_normal
min_angle = math.acos(min(1.0, max(-1.0, original_adjusted_normal.dot(inverted_normal))))
# 最小角度のしきい値を超えた場合
if min_angle >= angle_threshold_rad:
weight = 1.0
# 頂点グループにウェイトを設定
vertex_group.add([i], weight, 'REPLACE')
weight_calc_time = time.time() - weight_calc_time_start
print(f" ウェイト計算: {weight_calc_time:.2f}秒")
# 頂点グループをアクティブに設定
cloth_obj.vertex_groups.active_index = vertex_group.index
# Weight Paintモードに切り替え
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
# スムージング処理を実行(アクティブな頂点グループに適用される)
smooth_time_start = time.time()
bpy.ops.object.vertex_group_smooth(factor=0.3, repeat=10, expand=0.25)
# クリーニング処理も適用
bpy.ops.object.vertex_group_clean(group_select_mode='ACTIVE', limit=0.5)
smooth_time = time.time() - smooth_time_start
print(f" スムージング処理: {smooth_time:.2f}秒")
apply_max_filter_to_vertex_group(cloth_obj, new_group_name, filter_radius=0.01, filter_mask=filter_mask)
apply_min_filter_to_vertex_group(cloth_obj, new_group_name, filter_radius=0.01, filter_mask=filter_mask)
# 元のモードに戻す
bpy.ops.object.mode_set(mode=current_mode)
# BMeshをクリーンアップ
body_bm.free()
cloth_bm.free()
total_time = time.time() - start_time
print(f"{new_group_name}頂点グループを作成しました (合計時間: {total_time:.2f}秒)")
return vertex_group
def apply_smoothing_to_vertex_group(cloth_obj, vertex_group_name, smoothing_radius=0.02, iteration=1, use_distance_weighting=True, gaussian_falloff=True, neighbors_cache=None):
"""
指定された頂点グループに対してスムージング処理を適用します
距離による重み付きスムージングを使用して、頂点密度の偏りに対して頑健な結果を得ます
Parameters:
cloth_obj (obj): 衣装メッシュのオブジェクト
vertex_group_name (str): 対象の頂点グループ名
smoothing_radius (float): スムージング適用半径
use_distance_weighting (bool): 距離による重み付けを使用するかどうか
gaussian_falloff (bool): ガウシアン減衰を使用するかどうか
"""
start_time = time.time()
if vertex_group_name not in cloth_obj.vertex_groups:
print(f"エラー: 頂点グループ '{vertex_group_name}' が見つかりません")
return
vertex_group = cloth_obj.vertex_groups[vertex_group_name]
# 現在のモードを保存
current_mode = bpy.context.object.mode
bpy.ops.object.mode_set(mode='OBJECT')
# モディファイア適用後のメッシュを取得
cloth_bm = get_evaluated_mesh(cloth_obj)
cloth_bm.verts.ensure_lookup_table()
# 頂点座標をnumpy配列に変換
vertex_coords = np.array([v.co for v in cloth_bm.verts])
num_vertices = len(vertex_coords)
# 現在のウェイト値を取得
current_weights = np.zeros(num_vertices, dtype=np.float32)
for i, vertex in enumerate(cloth_obj.data.vertices):
for group in vertex.groups:
if group.group == vertex_group.index:
current_weights[i] = group.weight
break
# cKDTreeを使用して近傍検索を効率化
kdtree = cKDTree(vertex_coords)
# スムージング済みウェイト配列を初期化
smoothed_weights = np.copy(current_weights)
print(f" スムージング処理開始 (半径: {smoothing_radius}, 距離重み付け: {use_distance_weighting}, ガウシアン減衰: {gaussian_falloff})")
# ガウシアン関数のシグマ値(半径の1/3程度が適切)
sigma = smoothing_radius / 3.0
# 最初のイテレーションでneighbor_indicesをキャッシュ
if neighbors_cache is None:
neighbors_cache = {}
for iteration_idx in range(iteration):
# 各頂点に対してスムージングを適用
for i in range(num_vertices):
# 最初のイテレーションでneighbor_indicesを計算・キャッシュ、二回目以降はキャッシュを使用
if iteration_idx == 0:
if i not in neighbors_cache:
neighbor_indices = kdtree.query_ball_point(vertex_coords[i], smoothing_radius)
neighbors_cache[i] = neighbor_indices
else:
neighbor_indices = neighbors_cache[i]
else:
neighbor_indices = neighbors_cache[i]
if len(neighbor_indices) > 1: # 自分自身以外の近傍が存在する場合
# 近傍頂点への距離を計算
neighbor_coords = vertex_coords[neighbor_indices]
distances = np.linalg.norm(neighbor_coords - vertex_coords[i], axis=1)
# 近傍頂点のウェイト値を取得
neighbor_weights = current_weights[neighbor_indices]
if use_distance_weighting:
if gaussian_falloff:
# ガウシアン減衰による重み計算
weights = np.exp(-(distances ** 2) / (2 * sigma ** 2))
else:
# 線形減衰による重み計算
weights = np.maximum(0, 1.0 - distances / smoothing_radius)
# 自分自身の重みを少し強めに設定(オリジナル値の保持)
# self_index = np.where(distances == 0)[0]
# if len(self_index) > 0:
# weights[self_index[0]] *= 2.0
# 重み付き平均を計算
if np.sum(weights) > 0.001:
smoothed_weights[i] = np.sum(neighbor_weights * weights) / np.sum(weights)
else:
smoothed_weights[i] = current_weights[i]
else:
# 従来の単純平均
smoothed_weights[i] = np.mean(neighbor_weights)
else:
# 近傍頂点が自分だけの場合は元の値を保持
smoothed_weights[i] = current_weights[i]
current_weights = np.copy(smoothed_weights)
# 新しいウェイトを頂点グループに適用
for i in range(num_vertices):
vertex_group.add([i], smoothed_weights[i], 'REPLACE')
# BMeshをクリーンアップ
cloth_bm.free()
# 元のモードに戻す
bpy.ops.object.mode_set(mode=current_mode)
total_time = time.time() - start_time
print(f" スムージング完了: {total_time:.2f}秒")
return neighbors_cache
def apply_max_filter_to_vertex_group(cloth_obj, vertex_group_name, filter_radius=0.02, filter_mask=None):
"""
頂点グループに対してMaxフィルターを適用します
各頂点から一定距離内にある頂点のウェイトの最大値を取得し、その値を新しいウェイトとして設定します
Parameters:
cloth_obj (obj): 衣装メッシュのオブジェクト
vertex_group_name (str): 対象の頂点グループ名
filter_radius (float): フィルター適用半径
filter_mask (obj): フィルタリングに使用する頂点グループ
"""
start_time = time.time()
if vertex_group_name not in cloth_obj.vertex_groups:
print(f"エラー: 頂点グループ '{vertex_group_name}' が見つかりません")
return
vertex_group = cloth_obj.vertex_groups[vertex_group_name]
# 現在のモードを保存
current_mode = bpy.context.object.mode
bpy.ops.object.mode_set(mode='OBJECT')
# モディファイア適用後のメッシュを取得
cloth_bm = get_evaluated_mesh(cloth_obj)
cloth_bm.verts.ensure_lookup_table()
# 頂点座標をnumpy配列に変換
vertex_coords = np.array([v.co for v in cloth_bm.verts])
num_vertices = len(vertex_coords)
# 現在のウェイト値を取得
current_weights = np.zeros(num_vertices, dtype=np.float32)
for i, vertex in enumerate(cloth_bm.verts):
# 頂点グループのウェイトを取得
weight = 0.0
for group in cloth_obj.data.vertices[i].groups:
if group.group == vertex_group.index:
weight = group.weight
break
current_weights[i] = weight
# cKDTreeを使用して近傍検索を効率化
kdtree = cKDTree(vertex_coords)
# 新しいウェイト配列を初期化
new_weights = np.copy(current_weights)
print(f" Maxフィルター処理開始 (半径: {filter_radius})")
# 各頂点に対してMaxフィルターを適用
for i in range(num_vertices):
# 一定半径内の近傍頂点のインデックスを取得
neighbor_indices = kdtree.query_ball_point(vertex_coords[i], filter_radius)
if neighbor_indices:
# 近傍頂点のウェイトの最大値を取得
neighbor_weights = current_weights[neighbor_indices]
max_weight = np.max(neighbor_weights)
if filter_mask is not None:
new_weights[i] = filter_mask[i] * max_weight + (1 - filter_mask[i]) * current_weights[i]
else:
new_weights[i] = max_weight
# 新しいウェイトを頂点グループに適用
for i in range(num_vertices):
vertex_group.add([i], new_weights[i], 'REPLACE')
# BMeshをクリーンアップ
cloth_bm.free()
# 元のモードに戻す
bpy.ops.object.mode_set(mode=current_mode)
total_time = time.time() - start_time
print(f" Maxフィルター完了: {total_time:.2f}秒")
def apply_min_filter_to_vertex_group(cloth_obj, vertex_group_name, filter_radius=0.02, filter_mask=None):
"""
頂点グループに対してMinフィルターを適用します
各頂点から一定距離内にある頂点のウェイトの最小値を取得し、その値を新しいウェイトとして設定します
Parameters:
cloth_obj (obj): 衣装メッシュのオブジェクト
vertex_group_name (str): 対象の頂点グループ名
filter_radius (float): フィルター適用半径
filter_mask (obj): フィルタリングに使用する頂点グループ
"""
start_time = time.time()
if vertex_group_name not in cloth_obj.vertex_groups:
print(f"エラー: 頂点グループ '{vertex_group_name}' が見つかりません")
return
vertex_group = cloth_obj.vertex_groups[vertex_group_name]
# 現在のモードを保存
current_mode = bpy.context.object.mode
bpy.ops.object.mode_set(mode='OBJECT')
# モディファイア適用後のメッシュを取得
cloth_bm = get_evaluated_mesh(cloth_obj)
cloth_bm.verts.ensure_lookup_table()
# 頂点座標をnumpy配列に変換
vertex_coords = np.array([v.co for v in cloth_bm.verts])
num_vertices = len(vertex_coords)
# 現在のウェイト値を取得
current_weights = np.zeros(num_vertices, dtype=np.float32)
for i, vertex in enumerate(cloth_bm.verts):
# 頂点グループのウェイトを取得
weight = 0.0
for group in cloth_obj.data.vertices[i].groups:
if group.group == vertex_group.index:
weight = group.weight
break
current_weights[i] = weight
# cKDTreeを使用して近傍検索を効率化
kdtree = cKDTree(vertex_coords)
# 新しいウェイト配列を初期化
new_weights = np.copy(current_weights)
print(f" Minフィルター処理開始 (半径: {filter_radius})")
# 各頂点に対してMinフィルターを適用
for i in range(num_vertices):
# 一定半径内の近傍頂点のインデックスを取得
neighbor_indices = kdtree.query_ball_point(vertex_coords[i], filter_radius)
if neighbor_indices:
# 近傍頂点のウェイトの最小値を取得
neighbor_weights = current_weights[neighbor_indices]
min_weight = np.min(neighbor_weights)
if filter_mask is not None:
new_weights[i] = filter_mask[i] * min_weight + (1 - filter_mask[i]) * current_weights[i]
else:
new_weights[i] = min_weight
# 新しいウェイトを頂点グループに適用
for i in range(num_vertices):
vertex_group.add([i], new_weights[i], 'REPLACE')
# BMeshをクリーンアップ
cloth_bm.free()
# 元のモードに戻す
bpy.ops.object.mode_set(mode=current_mode)
total_time = time.time() - start_time
print(f" Minフィルター完了: {total_time:.2f}秒")
def get_mesh_cache_key(obj):
"""
オブジェクトのキャッシュキーを生成します
メッシュの頂点数、面数、モディファイアの有無を考慮してハッシュを作成
"""
mesh = obj.data
modifiers_str = "_".join([mod.name + str(mod.type) for mod in obj.modifiers])
cache_key = f"{obj.name}_{len(mesh.vertices)}_{len(mesh.polygons)}_{modifiers_str}_{obj.matrix_world.determinant():.6f}"
return cache_key
def get_cached_mesh_data(obj, smoothing_radius):
"""
オブジェクトのメッシュデータをキャッシュから取得、または新規作成してキャッシュに保存
"""
global _mesh_cache
cache_key = get_mesh_cache_key(obj)
radius_key = f"{cache_key}_{smoothing_radius:.6f}"
if radius_key in _mesh_cache:
print(f" キャッシュからメッシュデータを取得: {obj.name}")
return _mesh_cache[radius_key]
print(f" メッシュデータを新規作成: {obj.name}")
# モディファイア適用後のメッシュを取得
cloth_bm = get_evaluated_mesh(obj)
cloth_bm.verts.ensure_lookup_table()
cloth_bm.faces.ensure_lookup_table()
# 頂点座標と法線を取得
vertex_coords = np.array([v.co for v in cloth_bm.verts])
vertex_normals = np.array([v.normal for v in cloth_bm.verts])
# cKDTreeを構築
kdtree = cKDTree(vertex_coords)
# すべての頂点の近傍情報を事前計算
neighbor_data = {}
for i in range(len(vertex_coords)):
neighbor_indices = kdtree.query_ball_point(vertex_coords[i], smoothing_radius)
neighbor_data[i] = neighbor_indices
# キャッシュデータを作成
cache_data = {
'vertex_coords': vertex_coords,
'vertex_normals': vertex_normals,
'kdtree': kdtree,
'neighbor_data': neighbor_data,
'bmesh': cloth_bm
}
# キャッシュに保存
_mesh_cache[radius_key] = cache_data
return cache_data
def clear_mesh_cache():
"""
メッシュキャッシュをクリアします
"""
global _mesh_cache
# BMeshオブジェクトを解放
for cache_data in _mesh_cache.values():
if 'bmesh' in cache_data and cache_data['bmesh']:
cache_data['bmesh'].free()
_mesh_cache.clear()
print("メッシュキャッシュをクリアしました")
def apply_distance_normal_based_smoothing(body_obj, cloth_obj, distance_min=0.0, distance_max=0.1, angle_min=0.0, angle_max=30.0, new_group_name="InpaintMask", normal_radius=0.01, smoothing_mask_groups=None, target_vertex_groups=None, smoothing_radius=0.02, mask_group_name=None):
"""
素体メッシュからの距離と法線角度に基づいて衣装メッシュに頂点グループを作成し、スムージングを適用します
Parameters:
body_obj (obj): 素体メッシュのオブジェクト名
cloth_obj (obj): 衣装メッシュのオブジェクト名
distance_min (float): 距離の最小値、この値以下では ウェイト0.0
distance_max (float): 距離の最大値、この値以上では ウェイト1.0
angle_min (float): 角度の最小値、この値以下では ウェイト0.0(度単位)
angle_max (float): 角度の最大値、この値以上では ウェイト1.0(度単位)
new_group_name (str): 作成する頂点グループ名
normal_radius (float): 法線の加重平均を計算する際に考慮する球体の半径
smoothing_mask_groups (list): スムージングマスクとして適用する頂点グループ名のリスト
target_vertex_groups (list): スムージング対象の頂点グループ名のリスト
smoothing_radius (float): スムージングに使用する距離
mask_group_name (str): スムージング処理結果の合成強度に対するマスク頂点グループの名前
"""
start_time = time.time()
if not body_obj or not cloth_obj:
print("指定されたオブジェクトが見つかりません")
return
# 現在のモードを保存
current_mode = bpy.context.object.mode
# オブジェクトモードに切り替え
bpy.ops.object.mode_set(mode='OBJECT')
# 衣装オブジェクトを選択してアクティブに
bpy.ops.object.select_all(action='DESELECT')
cloth_obj.select_set(True)
bpy.context.view_layer.objects.active = cloth_obj
# BVHツリーを作成(高速な最近傍点検索のため)
# モディファイア適用後のターゲットメッシュを取得
body_bm_time_start = time.time()
body_bm = get_evaluated_mesh(body_obj)
body_bm.faces.ensure_lookup_table()
body_bm_time = time.time() - body_bm_time_start
print(f" Body BMesh作成: {body_bm_time:.2f}秒")
# ターゲットメッシュのBVHツリーを作成
bvh_time_start = time.time()
bvh_tree = BVHTree.FromBMesh(body_bm)
bvh_time = time.time() - bvh_time_start
print(f" BVHツリー作成: {bvh_time:.2f}秒")
# 頂点グループがまだ存在しない場合は作成
if new_group_name not in cloth_obj.vertex_groups:
cloth_obj.vertex_groups.new(name=new_group_name)
vertex_group = cloth_obj.vertex_groups[new_group_name]
# 角度の最小値・最大値をラジアンに変換
angle_min_rad = math.radians(angle_min)
angle_max_rad = math.radians(angle_max)
# モディファイア適用後のソースメッシュを取得
cloth_bm_time_start = time.time()
cloth_bm = get_evaluated_mesh(cloth_obj)
cloth_bm.verts.ensure_lookup_table()
cloth_bm.faces.ensure_lookup_table()
cloth_bm_time = time.time() - cloth_bm_time_start
print(f" Cloth BMesh作成: {cloth_bm_time:.2f}秒")
# トランスフォームマトリックスをキャッシュ(繰り返しの計算を避けるため)
body_normal_matrix = body_obj.matrix_world.inverted().transposed()
cloth_normal_matrix = cloth_obj.matrix_world.inverted().transposed()
# 修正した法線を格納する辞書
adjusted_normals_time_start = time.time()
adjusted_normals = {}
# 衣装メッシュの各頂点の法線処理(逆転の必要があるかチェック)
for i, vertex in enumerate(cloth_bm.verts):
# ワールド座標系での頂点位置と法線
cloth_vert_world = vertex.co
original_normal_world = (cloth_normal_matrix @ Vector((vertex.normal[0], vertex.normal[1], vertex.normal[2], 0))).xyz.normalized()
# 素体メッシュ上の最近傍面を検索
nearest_result = bvh_tree.find_nearest(cloth_vert_world)
if nearest_result:
# BVHTree.find_nearest() は (co, normal, index, distance) を返す
nearest_point, nearest_normal, nearest_face_index, _ = nearest_result
# 最近傍面を取得
face = body_bm.faces[nearest_face_index]
face_normal = face.normal
# 面の法線をワールド座標系に変換
face_normal_world = (body_normal_matrix @ Vector((face_normal[0], face_normal[1], face_normal[2], 0))).xyz.normalized()
# 内積が負の場合、法線を反転
dot_product = original_normal_world.dot(face_normal_world)
if dot_product < 0:
adjusted_normal = -original_normal_world
else:
adjusted_normal = original_normal_world
# 調整済み法線を辞書に保存
adjusted_normals[i] = adjusted_normal
else:
# 最近傍点が見つからない場合は元の法線を使用
adjusted_normals[i] = original_normal_world
adjusted_normals_time = time.time() - adjusted_normals_time_start
print(f" 法線調整: {adjusted_normals_time:.2f}秒")
# 面の中心点と面積を事前計算してキャッシュ
face_cache_time_start = time.time()
face_centers = []
face_areas = {}
face_adjusted_normals = {}
face_indices = []
for face in cloth_bm.faces:
# 面の中心点を計算
center = Vector((0, 0, 0))
for v in face.verts:
center += v.co
center /= len(face.verts)
face_centers.append(center)
face_indices.append(face.index)
# 面積を計算
face_areas[face.index] = face.calc_area()
# 面の調整済み法線を計算
face_normal = Vector((0, 0, 0))
for v in face.verts:
face_normal += adjusted_normals[v.index]
face_adjusted_normals[face.index] = face_normal.normalized()
face_cache_time = time.time() - face_cache_time_start
print(f" 面キャッシュ作成: {face_cache_time:.2f}秒")
# 衣装メッシュの面に対してKDTreeを構築
kdtree_time_start = time.time()
# size = len(cloth_bm.faces)
# kd = mathutils.kdtree.KDTree(size)
# for face_index, center in face_centers.items():
# kd.insert(center, face_index)
# kd.balance()
kd = cKDTree(face_centers)
kdtree_time = time.time() - kdtree_time_start
print(f" KDTree構築: {kdtree_time:.2f}秒")
# 各頂点の法線を近傍面の法線の加重平均で更新
normal_avg_time_start = time.time()
for i, vertex in enumerate(cloth_bm.verts):
# 一定の半径内の面を検索
co = vertex.co
weighted_normal = Vector((0, 0, 0))
total_weight = 0
# KDTreeを使用して近傍の面を効率的に検索
for index in kd.query_ball_point(co, normal_radius):
# 距離に応じた重みを計算(距離が近いほど影響が大きい)
face_index = face_indices[index]
area = face_areas[face_index]
dist = (co - face_centers[index]).length
# 距離に基づく減衰係数
distance_factor = 1.0 - (dist / normal_radius) if dist < normal_radius else 0.0
weight = area * distance_factor
weighted_normal += face_adjusted_normals[face_index] * weight
total_weight += weight
# 重みの合計が0でない場合は正規化
if total_weight > 0:
weighted_normal /= total_weight
weighted_normal.normalize()
# 調整済み法線を更新
adjusted_normals[i] = weighted_normal
normal_avg_time = time.time() - normal_avg_time_start
print(f" 法線加重平均計算: {normal_avg_time:.2f}秒")
# ----------------------------------
# 衣装メッシュの各頂点に対して処理
weight_calc_time_start = time.time()
for i, vertex in enumerate(cloth_bm.verts):
# ワールド座標系での頂点位置
cloth_vert_world = vertex.co
# 調整済みの法線を使用
cloth_normal_world = adjusted_normals[i]
# 素体メッシュ上の最近傍面を検索
nearest_result = bvh_tree.find_nearest(cloth_vert_world)
distance = float('inf') # 初期値として無限大を設定
if nearest_result:
# BVHTree.find_nearest() は (co, normal, index, distance) を返す
nearest_point, nearest_normal, nearest_face_index, _ = nearest_result
# 最近傍面を取得
face = body_bm.faces[nearest_face_index]
face_normal = face.normal
# 面上の最近接点を計算
closest_point_on_face = mathutils.geometry.closest_point_on_tri(
cloth_vert_world,
face.verts[0].co,
face.verts[1].co,
face.verts[2].co
)
# 面の法線をワールド座標系に変換
face_normal_world = (body_normal_matrix @ Vector((face_normal[0], face_normal[1], face_normal[2], 0))).xyz.normalized()
# 距離を計算
distance = (cloth_vert_world - closest_point_on_face).length
# 最近傍点と法線を設定
nearest_point = closest_point_on_face
nearest_normal = face_normal_world
else:
# 最近傍点が見つからない場合は初期値をNoneに設定
nearest_point = None
nearest_normal = None
# 頂点ウェイトの初期値
weight = 0.0
if nearest_point:
# 距離に基づくウェイト(線形補間)
distance_weight = 0.0
if distance <= distance_min:
distance_weight = 0.0
elif distance >= distance_max:
distance_weight = 1.0
else:
# 線形補間
distance_weight = (distance - distance_min) / (distance_max - distance_min)
# 法線角度に基づくウェイト(線形補間)
angle_weight = 0.0
if nearest_normal:
# 法線の角度を計算
angle = math.acos(min(1.0, max(-1.0, cloth_normal_world.dot(nearest_normal))))
# 90度以上の場合は法線を反転して再計算
if angle > math.pi / 2:
inverted_normal = -nearest_normal
angle = math.acos(min(1.0, max(-1.0, cloth_normal_world.dot(inverted_normal))))
# 角度の線形補間
if angle <= angle_min_rad:
angle_weight = 0.0
elif angle >= angle_max_rad:
angle_weight = 1.0
else:
# 線形補間
angle_weight = (angle - angle_min_rad) / (angle_max_rad - angle_min_rad)
weight = distance_weight *angle_weight
# 頂点グループにウェイトを設定
vertex_group.add([i], weight, 'REPLACE')
weight_calc_time = time.time() - weight_calc_time_start
print(f" ウェイト計算: {weight_calc_time:.2f}秒")
# 頂点グループをアクティブに設定
cloth_obj.vertex_groups.active_index = vertex_group.index
# Weight Paintモードに切り替え
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
# スムージング処理を実行(アクティブな頂点グループに適用される)
smooth_time_start = time.time()
bpy.ops.object.vertex_group_smooth(group_select_mode='ACTIVE', factor=0.3, repeat=10, expand=0.0)
# クリーニング処理も適用
bpy.ops.object.vertex_group_clean(group_select_mode='ACTIVE', limit=0.5)
smooth_time = time.time() - smooth_time_start
print(f" スムージング処理: {smooth_time:.2f}秒")
# オブジェクトモードに戻す
bpy.ops.object.mode_set(mode='OBJECT')
# Maxフィルターを適用
print(" Maxフィルター適用中...")
apply_max_filter_to_vertex_group(cloth_obj, new_group_name, filter_radius=0.02)
# === 新しく作成された頂点グループに対するスムージング処理 ===
print(" 新しく作成された頂点グループのスムージング処理適用中...")
neighbors_cache_result = apply_smoothing_to_vertex_group(cloth_obj, new_group_name, smoothing_radius, iteration=1, use_distance_weighting=True, gaussian_falloff=True)
if smoothing_mask_groups:
# 新しく生成された頂点グループのウェイトを取得
new_group_weights = np.zeros(len(cloth_obj.data.vertices), dtype=np.float32)
for i, vertex in enumerate(cloth_obj.data.vertices):
for group in vertex.groups:
if group.group == vertex_group.index:
new_group_weights[i] = group.weight
break
# 指定された頂点グループのウェイト合計を計算
total_target_weights = np.zeros(len(cloth_obj.data.vertices), dtype=np.float32)
for target_group_name in smoothing_mask_groups:
if target_group_name in cloth_obj.vertex_groups:
target_group = cloth_obj.vertex_groups[target_group_name]
print(f" 頂点グループ '{target_group_name}' のウェイトを取得中...")
for i, vertex in enumerate(cloth_obj.data.vertices):
for group in vertex.groups:
if group.group == target_group.index:
total_target_weights[i] += group.weight
break
else:
print(f" 警告: 頂点グループ '{target_group_name}' が見つかりません")
if mask_group_name and mask_group_name in cloth_obj.vertex_groups:
mask_group = cloth_obj.vertex_groups[mask_group_name]
for i in range(len(cloth_obj.data.vertices)):
weight = 0.0
for group in cloth_obj.data.vertices[i].groups:
if group.group == mask_group.index:
weight = group.weight
break
total_target_weights[i] *= weight
# 新しい頂点グループのウェイトから合計を減算
masked_weights = np.maximum(0.0, new_group_weights * total_target_weights)
# 結果を新しい頂点グループに適用
for i in range(len(cloth_obj.data.vertices)):
vertex_group.add([i], masked_weights[i], 'REPLACE')
# === 追加処理:指定された頂点グループのウェイト処理 ===
if target_vertex_groups:
print(" 指定された頂点グループの処理開始...")
# 生成された頂点グループのウェイトを取得
mask_weights = np.zeros(len(cloth_obj.data.vertices), dtype=np.float32)
for i, vertex in enumerate(cloth_obj.data.vertices):
for group in vertex.groups:
if group.group == vertex_group.index:
mask_weights[i] = group.weight
break
# 指定された頂点グループを処理
for target_group_name in target_vertex_groups:
if target_group_name not in cloth_obj.vertex_groups:
print(f" 警告: 頂点グループ '{target_group_name}' が見つかりません")
continue
target_group = cloth_obj.vertex_groups[target_group_name]
print(f" 処理中の頂点グループ: {target_group_name}")
# 1. オリジナルのウェイトを取得
original_weights = np.zeros(len(cloth_obj.data.vertices), dtype=np.float32)
for i, vertex in enumerate(cloth_obj.data.vertices):
for group in vertex.groups:
if group.group == target_group.index:
original_weights[i] = group.weight
break
# 2. スムージング処理(original_weightsがすべて0でない場合のみ)
if np.any(original_weights > 0):
print(f" スムージング処理実行中...")
neighbors_cache_result = apply_smoothing_to_vertex_group(cloth_obj, target_group_name, smoothing_radius, iteration=3, use_distance_weighting=True, gaussian_falloff=True, neighbors_cache=neighbors_cache_result)
# 3. スムージング後のウェイトを取得
smoothed_weights = np.zeros(len(cloth_obj.data.vertices), dtype=np.float32)
for i, vertex in enumerate(cloth_obj.data.vertices):
for group in vertex.groups:
if group.group == target_group.index:
smoothed_weights[i] = group.weight
break
# 4. 合成処理
print(f" 合成処理...")
for i in range(len(cloth_obj.data.vertices)):
# 生成された頂点グループのウェイトを合成の重みとして使用
blend_factor = mask_weights[i]
# 元のウェイトとスムージング結果を合成
final_weight = original_weights[i] * (1.0 - blend_factor) + smoothed_weights[i] * blend_factor
# 最終ウェイトを設定
target_group.add([i], final_weight, 'REPLACE')
else:
print(f" スキップ: original_weightsがすべて0のため処理をスキップします")
print(f" 頂点グループ '{target_group_name}' の処理完了")
# 元のモードに戻す
bpy.ops.object.mode_set(mode=current_mode)
# BMeshをクリーンアップ
body_bm.free()
cloth_bm.free()
# キャッシュをクリーンアップ(メモリ使用量削減のため)
clear_mesh_cache()
total_time = time.time() - start_time
print(f"{new_group_name}頂点グループを作成しました (合計時間: {total_time:.2f}秒)")
return vertex_group
def process_weight_transfer(target_obj, armature, base_avatar_data, clothing_avatar_data, field_path, clothing_armature, cloth_metadata=None):
"""Process weight transfer for the target object."""
start_time = time.time()
# Humanoidボーン名からボーン名への変換マップを作成
humanoid_to_bone = {}
for bone_map in base_avatar_data.get("humanoidBones", []):
if "humanoidBoneName" in bone_map and "boneName" in bone_map:
humanoid_to_bone[bone_map["humanoidBoneName"]] = bone_map["boneName"]
# 補助ボーンのマッピングを作成
auxiliary_bones = {}
for aux_set in base_avatar_data.get("auxiliaryBones", []):
humanoid_bone = aux_set["humanoidBoneName"]
auxiliary_bones[humanoid_bone] = aux_set["auxiliaryBones"]
auxiliary_bones_to_humanoid = {}
for aux_set in base_avatar_data.get("auxiliaryBones", []):
for aux_bone in aux_set["auxiliaryBones"]:
auxiliary_bones_to_humanoid[aux_bone] = aux_set["humanoidBoneName"]
finger_humanoid_bones = [
"LeftIndexProximal", "LeftIndexIntermediate", "LeftIndexDistal",
"LeftMiddleProximal", "LeftMiddleIntermediate", "LeftMiddleDistal",
"LeftRingProximal", "LeftRingIntermediate", "LeftRingDistal",
"LeftLittleProximal", "LeftLittleIntermediate", "LeftLittleDistal",
"RightIndexProximal", "RightIndexIntermediate", "RightIndexDistal",
"RightMiddleProximal", "RightMiddleIntermediate", "RightMiddleDistal",
"RightRingProximal", "RightRingIntermediate", "RightRingDistal",
"RightLittleProximal", "RightLittleIntermediate", "RightLittleDistal",
"LeftHand", "RightHand"
]
left_foot_finger_humanoid_bones = [
"LeftFootThumbProximal",
"LeftFootThumbIntermediate",
"LeftFootThumbDistal",
"LeftFootIndexProximal",
"LeftFootIndexIntermediate",
"LeftFootIndexDistal",
"LeftFootMiddleProximal",
"LeftFootMiddleIntermediate",
"LeftFootMiddleDistal",
"LeftFootRingProximal",
"LeftFootRingIntermediate",
"LeftFootRingDistal",
"LeftFootLittleProximal",
"LeftFootLittleIntermediate",
"LeftFootLittleDistal",
]
right_foot_finger_humanoid_bones = [
"RightFootThumbProximal",
"RightFootThumbIntermediate",
"RightFootThumbDistal",
"RightFootIndexProximal",
"RightFootIndexIntermediate",
"RightFootIndexDistal",
"RightFootMiddleProximal",
"RightFootMiddleIntermediate",
"RightFootMiddleDistal",
"RightFootRingProximal",
"RightFootRingIntermediate",
"RightFootRingDistal",
"RightFootLittleProximal",
"RightFootLittleIntermediate",
"RightFootLittleDistal"
]
# 指のボーンの実際のボーン名を取得
finger_bone_names = set()
for humanoid_bone in finger_humanoid_bones:
if humanoid_bone in humanoid_to_bone:
bone_name = humanoid_to_bone[humanoid_bone]
finger_bone_names.add(bone_name)
# 関連する補助ボーンも追加
if humanoid_bone in auxiliary_bones:
for aux_bone in auxiliary_bones[humanoid_bone]:
finger_bone_names.add(aux_bone)
print(f"finger_bone_names: {finger_bone_names}")
# 指のボーンウェイトを持つ頂点を特定
finger_vertices = set()
if finger_bone_names:
mesh = target_obj.data
# 各指のボーン名に対応する頂点グループをチェック
for bone_name in finger_bone_names:
if bone_name in target_obj.vertex_groups:
for vert in mesh.vertices:
weight = 0.0
for g in vert.groups:
if target_obj.vertex_groups[g.group].name == bone_name:
weight = g.weight
break
if weight > 0.001: # 閾値以上のウェイトを持つ頂点
finger_vertices.add(vert.index)
print(f"finger_vertices: {len(finger_vertices)}")
closing_filter_mask_weights = create_blendshape_mask(target_obj, ["LeftUpperLeg", "RightUpperLeg", "Hips", "Chest", "Spine", "LeftShoulder", "RightShoulder", "LeftBreast", "RightBreast"], base_avatar_data)
def attempt_weight_transfer(source_obj, vertex_group, max_distance_try=0.2, max_distance_tried=0.0):
"""ウェイト転送を試行"""
bone_groups_tmp = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
prev_weights = store_weights(target_obj, bone_groups_tmp)
initial_max_distance = max_distance_try
while max_distance_try <= 1.0:
if max_distance_tried + 0.0001 < max_distance_try:
create_distance_normal_based_vertex_group(bpy.data.objects["Body.BaseAvatar"], target_obj, max_distance_try, 0.005, 20.0, "InpaintMask", normal_radius=0.003, filter_mask=closing_filter_mask_weights)
#デバッグ用にbpy.data.objects["Body.BaseAvatar"]をコピーしておく
# body_base_avatar_copy = bpy.data.objects["Body.BaseAvatar"].copy()
# body_base_avatar_copy.data = bpy.data.objects["Body.BaseAvatar"].data.copy()
# body_base_avatar_copy.name = "Body.BaseAvatar.Copy"
# bpy.context.scene.collection.objects.link(body_base_avatar_copy)
# target_obj_copy = target_obj.copy()
# target_obj_copy.data = target_obj.data.copy()
# target_obj_copy.name = target_obj.name + ".Copy"
# bpy.context.scene.collection.objects.link(target_obj_copy)
# current_mode = bpy.context.object.mode
# bpy.ops.object.mode_set(mode='OBJECT')
# current_active = bpy.context.active_object
# bpy.context.view_layer.objects.active = body_base_avatar_copy
# selection = bpy.context.selected_objects
# bpy.ops.object.select_all(action='DESELECT')
# body_base_avatar_copy.select_set(True)
# target_obj_copy.select_set(True)
# bpy.ops.object.convert(target='MESH')
# bpy.ops.object.select_all(action='DESELECT')
# for obj in selection:
# obj.select_set(True)
# bpy.context.view_layer.objects.active = current_active
# bpy.ops.object.mode_set(mode=current_mode)
# 指のボーンウェイトを持つ頂点がある場合、より精密なInpaintMaskを作成
# if finger_vertices and len(finger_vertices) > 0:
# # normal_radius=0.001で精密なマスクを作成(一時的な名前で)
# temp_mask_name = "TempFingerInpaintMask"
# create_distance_normal_based_vertex_group(bpy.data.objects["Body.BaseAvatar"], target_obj, max_distance_try, 0.003, 30.0, temp_mask_name, normal_radius=0.001)
# # 指の頂点のみ、精密なマスクの値で元のInpaintMaskを上書き
# if temp_mask_name in target_obj.vertex_groups and "InpaintMask" in target_obj.vertex_groups:
# temp_group = target_obj.vertex_groups[temp_mask_name]
# inpaint_group = target_obj.vertex_groups["InpaintMask"]
# for vert_idx in finger_vertices:
# vert = target_obj.data.vertices[vert_idx]
# weight = 0.0
# for g in vert.groups:
# if target_obj.vertex_groups[g.group].name == temp_mask_name:
# weight = g.weight
# break
# inpaint_group.add([vert_idx], weight, 'REPLACE')
# # 一時的なグループを削除
# # target_obj.vertex_groups.remove(temp_group)
if finger_vertices and len(finger_vertices) > 0:
# 指の頂点でInpaintMaskの値を0にする
for vert_idx in finger_vertices:
target_obj.vertex_groups["InpaintMask"].add([vert_idx], 0.0, 'REPLACE')
#MF_InpaintのウェイトをInpaintMaskのウェイトにかける
if "MF_Inpaint" in target_obj.vertex_groups and "InpaintMask" in target_obj.vertex_groups:
inpaint_group = target_obj.vertex_groups["InpaintMask"]
source_group = target_obj.vertex_groups["MF_Inpaint"]
for vert in target_obj.data.vertices:
source_weight = 0.0
for g in vert.groups:
if g.group == source_group.index:
source_weight = g.weight
break
inpaint_weight = 0.0
for g in vert.groups:
if g.group == inpaint_group.index:
inpaint_weight = g.weight
break
inpaint_group.add([vert.index], source_weight * inpaint_weight, 'REPLACE')
# vertex_groupのウェイトが0である頂点のInpaintMaskウェイトを0に設定
if "InpaintMask" in target_obj.vertex_groups and vertex_group in target_obj.vertex_groups:
inpaint_group = target_obj.vertex_groups["InpaintMask"]
source_group = target_obj.vertex_groups[vertex_group]
for vert in target_obj.data.vertices:
source_weight = 0.0
# vertex_groupのウェイトを取得
for g in vert.groups:
if g.group == source_group.index:
source_weight = g.weight
break
# ウェイトが0の場合、InpaintMaskも0に設定
if source_weight == 0.0:
inpaint_group.add([vert.index], 0.0, 'REPLACE')
try:
bpy.context.scene.robust_weight_transfer_settings.source_object = source_obj
bpy.context.object.robust_weight_transfer_settings.vertex_group = vertex_group
bpy.context.scene.robust_weight_transfer_settings.inpaint_mode = 'POINT'
bpy.context.scene.robust_weight_transfer_settings.max_distance = max_distance_try
bpy.context.scene.robust_weight_transfer_settings.use_deformed_target = True
bpy.context.scene.robust_weight_transfer_settings.use_deformed_source = True
bpy.context.scene.robust_weight_transfer_settings.enforce_four_bone_limit = True
bpy.context.scene.robust_weight_transfer_settings.max_normal_angle_difference = 1.5708
#bpy.context.scene.robust_weight_transfer_settings.max_normal_angle_difference = 0.349066
bpy.context.scene.robust_weight_transfer_settings.flip_vertex_normal = True
bpy.context.scene.robust_weight_transfer_settings.smoothing_enable = False
bpy.context.scene.robust_weight_transfer_settings.smoothing_repeat = 4
bpy.context.scene.robust_weight_transfer_settings.smoothing_factor = 0.5
bpy.context.object.robust_weight_transfer_settings.inpaint_group = "InpaintMask"
bpy.context.object.robust_weight_transfer_settings.inpaint_threshold = 0.5
bpy.context.object.robust_weight_transfer_settings.inpaint_group_invert = False
bpy.context.object.robust_weight_transfer_settings.vertex_group_invert = False
bpy.context.scene.robust_weight_transfer_settings.group_selection = 'DEFORM_POSE_BONES'
bpy.ops.object.skin_weight_transfer()
print(f"Weight transfered with max_distance {max_distance_try}")
return True, max_distance_try
except RuntimeError as e:
print(f"Weight transfer failed with max_distance {max_distance_try}: {str(e)}")
restore_weights(target_obj, prev_weights)
max_distance_try += 0.05
if max_distance_try > 1.0:
print("Max distance exceeded 1.0, stopping weight transfer attempts")
return False, initial_max_distance
return False, initial_max_distance
def get_vertex_weight_safe(group, vertex_index):
"""頂点グループからウェイトを安全に取得"""
if not group:
return 0.0
try:
for g in target_obj.data.vertices[vertex_index].groups:
if g.group == group.index:
return g.weight
except Exception:
pass
return 0.0
def propagate_weights_to_side_vertices(target_obj, bone_groups, original_humanoid_weights, clothing_armature, max_iterations=100):
"""
側面ウェイトを持つがボーンウェイトを持たない頂点にウェイトを伝播
"""
# BMeshを作成
bm = bmesh.new()
bm.from_mesh(target_obj.data)
bm.verts.ensure_lookup_table()
# 側面ウェイトグループのインデックスを取得
left_group = target_obj.vertex_groups.get("LeftSideWeights")
right_group = target_obj.vertex_groups.get("RightSideWeights")
# 衣装アーマチュアのボーングループも含めた対象グループを作成
all_deform_groups = set(bone_groups)
if clothing_armature:
all_deform_groups.update(bone.name for bone in clothing_armature.data.bones)
def get_side_weight(vert_idx, group):
"""頂点の側面ウェイトを取得"""
if not group:
return 0.0
try:
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
return g.weight
except Exception:
pass
return 0.0
def has_bone_weights(vert_idx):
"""頂点がボーンウェイトを持つかチェック(衣装のボーングループも含む)"""
for g in target_obj.data.vertices[vert_idx].groups:
if target_obj.vertex_groups[g.group].name in all_deform_groups:
return True
return False
# 処理対象の頂点を特定
vertices_to_process = set()
for vert in target_obj.data.vertices:
# 側面ウェイトがあり、ボーンウェイトを持たない頂点を特定
if (get_side_weight(vert.index, left_group) > 0 or
get_side_weight(vert.index, right_group) > 0) and not has_bone_weights(vert.index):
vertices_to_process.add(vert.index)
if not vertices_to_process:
bm.free()
return
print(f"Found {len(vertices_to_process)} vertices without bone weights but with side weights")
# ウェイト伝播の反復処理
iteration = 0
while vertices_to_process and iteration < max_iterations:
propagated_this_iteration = set()
for vert_idx in vertices_to_process:
vert = bm.verts[vert_idx]
# 隣接頂点を取得
neighbors_with_weights = []
for edge in vert.link_edges:
other = edge.other_vert(vert)
if has_bone_weights(other.index):
# 頂点間の距離を計算
distance = (vert.co - other.co).length
neighbors_with_weights.append((other.index, distance))
if neighbors_with_weights:
# 最も近い頂点を選択
closest_vert_idx = min(neighbors_with_weights, key=lambda x: x[1])[0]
# ウェイトをコピー
for group in target_obj.vertex_groups:
if group.name in all_deform_groups:
weight = 0.0
for g in target_obj.data.vertices[closest_vert_idx].groups:
if g.group == group.index:
weight = g.weight
break
if weight > 0:
group.add([vert_idx], weight, 'REPLACE')
propagated_this_iteration.add(vert_idx)
if not propagated_this_iteration:
break
print(f"Iteration {iteration + 1}: Propagated weights to {len(propagated_this_iteration)} vertices")
vertices_to_process -= propagated_this_iteration
iteration += 1
# 残りの頂点に元のウェイトを割り当て
if vertices_to_process:
print(f"Restoring original weights for {len(vertices_to_process)} remaining vertices")
for vert_idx in vertices_to_process:
if vert_idx in original_humanoid_weights:
# 現在のウェイトを削除
for group in target_obj.vertex_groups:
if group.name in all_deform_groups:
try:
group.remove([vert_idx])
except RuntimeError:
continue
# 元のウェイトを復元
for group_name, weight in original_humanoid_weights[vert_idx].items():
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups[group_name].add([vert_idx], weight, 'REPLACE')
bm.free()
print(f"処理開始: {target_obj.name}")
if "InpaintMask" not in target_obj.vertex_groups:
target_obj.vertex_groups.new(name="InpaintMask")
# 側面ウェイトグループ作成
side_weight_time_start = time.time()
create_side_weight_groups(target_obj, base_avatar_data, clothing_armature, clothing_avatar_data)
side_weight_time = time.time() - side_weight_time_start
print(f" 側面ウェイトグループ作成: {side_weight_time:.2f}秒")
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.select_all(action='DESELECT')
bpy.context.view_layer.objects.active = target_obj
# 転送前の頂点グループ名を保存
original_groups = set(vg.name for vg in target_obj.vertex_groups)
# 対象のボーングループを取得
bone_groups = get_humanoid_and_auxiliary_bone_groups(base_avatar_data)
# 元のHumanoidウェイトを保存
store_weights_time_start = time.time()
original_humanoid_weights = store_weights(target_obj, bone_groups)
store_weights_time = time.time() - store_weights_time_start
print(f" 元のウェイト保存: {store_weights_time:.2f}秒")
# 衣装アーマチュアのボーングループも含めた対象グループを作成
all_deform_groups = set(bone_groups)
if clothing_armature:
all_deform_groups.update(bone.name for bone in clothing_armature.data.bones)
# original_groupsからbone_groupsを除いたグループのウェイトを保存
original_non_humanoid_groups = all_deform_groups - bone_groups
original_non_humanoid_weights = store_weights(target_obj, original_non_humanoid_groups)
# 全てのグループのウェイトを保存
all_weights = store_weights(target_obj, all_deform_groups)
# ウェイト初期化
reset_weights_time_start = time.time()
reset_bone_weights(target_obj, all_deform_groups)
reset_weights_time = time.time() - reset_weights_time_start
print(f" ウェイト初期化: {reset_weights_time:.2f}秒")
# 左側のウェイト転送
left_transfer_time_start = time.time()
left_transfer_success, left_distance_used = attempt_weight_transfer(bpy.data.objects["Body.BaseAvatar.LeftOnly"], "LeftSideWeights")
left_transfer_time = time.time() - left_transfer_time_start
print(f" 左側ウェイト転送: {left_transfer_time:.2f}秒 (成功: {left_transfer_success}, 距離: {left_distance_used})")
failed = False
if not left_transfer_success:
print(" 左側ウェイト転送失敗のため処理中断")
failed = True
if not failed:
# 右側のウェイト転送
right_transfer_time_start = time.time()
right_transfer_success, right_distance_used = attempt_weight_transfer(bpy.data.objects["Body.BaseAvatar.RightOnly"], "RightSideWeights", max_distance_tried=left_distance_used)
right_transfer_time = time.time() - right_transfer_time_start
print(f" 右側ウェイト転送: {right_transfer_time:.2f}秒 (成功: {right_transfer_success}, 距離: {right_distance_used})")
if not right_transfer_success:
print(" 右側ウェイト転送失敗のため処理中断")
failed = True
if failed:
reset_bone_weights(target_obj, bone_groups)
restore_weights(target_obj, all_weights)
return
# MF_Armpitグループが存在し、0.001より大きいウェイトを持つ頂点があるかチェック
mf_armpit_group = target_obj.vertex_groups.get("MF_Armpit")
should_armpit_process = False
if mf_armpit_group:
for vert in target_obj.data.vertices:
for g in vert.groups:
if g.group == mf_armpit_group.index and g.weight > 0.001:
should_armpit_process = True
break
if should_armpit_process:
break
if should_armpit_process:
if armature and armature.type == 'ARMATURE':
print(" MF_Armpitグループが存在し、有効なウェイトを持つため処理を実行")
base_humanoid_weights = store_weights(target_obj, bone_groups)
reset_bone_weights(target_obj, bone_groups)
restore_weights(target_obj, all_weights)
# LeftUpperArmとRightUpperArmボーンにY軸回転を適用
print(" LeftUpperArmとRightUpperArmボーンにY軸回転を適用")
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperArmとRightUpperArmのboneNameを取得
left_upper_arm_bone = None
right_upper_arm_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperArm":
left_upper_arm_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperArm":
right_upper_arm_bone = bone_map.get("boneName")
# LeftUpperLegボーンに-45度のY軸回転を適用
if left_upper_arm_bone and left_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperLegボーンに45度のY軸回転を適用
if right_upper_arm_bone and right_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = target_obj
bpy.context.view_layer.update()
shape_key_state = save_shape_key_state(target_obj)
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 一時シェイプキーを作成
temp_shape_name = "WT_shape_forA.MFTemp"
if target_obj.data.shape_keys and temp_shape_name in target_obj.data.shape_keys.key_blocks:
temp_shape_key = target_obj.data.shape_keys.key_blocks[temp_shape_name]
temp_shape_key.value = 1.0
# ウェイト初期化
reset_bone_weights(target_obj, bone_groups)
# ウェイト転送
print(" ウェイト転送開始")
transfer_success, distance_used = attempt_weight_transfer(bpy.data.objects["Body.BaseAvatar"], "BothSideWeights")
restore_shape_key_state(target_obj, shape_key_state)
temp_shape_key.value = 0.0
# LeftUpperArmとRightUpperArmボーンにY軸逆回転を適用
print(" LeftUpperArmとRightUpperArmボーンにY軸逆回転を適用")
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperArmとRightUpperArmのboneNameを取得
left_upper_arm_bone = None
right_upper_arm_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperArm":
left_upper_arm_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperArm":
right_upper_arm_bone = bone_map.get("boneName")
# LeftUpperLegボーンに-45度のY軸回転を適用
if left_upper_arm_bone and left_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperLegボーンに45度のY軸回転を適用
if right_upper_arm_bone and right_upper_arm_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_arm_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-45), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = target_obj
bpy.context.view_layer.update()
# bone_groupsのウェイトとbase_humanoid_weightsを合成
mf_armpit_group = target_obj.vertex_groups.get("MF_Armpit")
if mf_armpit_group and base_humanoid_weights:
print(" ウェイト合成処理開始")
for vert in target_obj.data.vertices:
vert_idx = vert.index
# MF_Armpitグループのウェイトを取得
mf_armpit_weight = 0.0
for g in vert.groups:
if g.group == mf_armpit_group.index:
mf_armpit_weight = g.weight
break
# 合成係数を計算
current_factor = mf_armpit_weight
base_factor = 1.0 - mf_armpit_weight
# bone_groupsに属するグループのウェイトを合成
for group_name in bone_groups:
if group_name in target_obj.vertex_groups:
group = target_obj.vertex_groups[group_name]
# 現在のウェイトを取得
current_weight = 0.0
for g in vert.groups:
if g.group == group.index:
current_weight = g.weight
break
# base_humanoid_weightsからのウェイトを取得
base_weight = 0.0
if vert_idx in base_humanoid_weights and group_name in base_humanoid_weights[vert_idx]:
base_weight = base_humanoid_weights[vert_idx][group_name]
# ウェイトを合成:(現在のウェイト) * (MF_crotchのウェイト) + (base_humanoid_weightsでのウェイト) * (1.0 - MF_crotchのウェイト)
blended_weight = current_weight * current_factor + base_weight * base_factor
# 合成されたウェイトを適用
if blended_weight > 0.0001: # 微小値は無視
group.add([vert_idx], blended_weight, 'REPLACE')
base_humanoid_weights[vert_idx][group_name] = blended_weight
else:
try:
group.remove([vert_idx])
base_humanoid_weights[vert_idx][group_name] = 0.0
except RuntimeError:
pass
print(" ウェイト合成処理完了")
else:
print(" MF_Armpitグループが存在しないか、アーマチュアが存在しないため処理をスキップ")
else:
print(" MF_Armpitグループが存在しないか、有効なウェイトがないため処理をスキップ")
# MF_crotchグループが存在し、0.001より大きいウェイトを持つ頂点があるかチェック
mf_crotch_group = target_obj.vertex_groups.get("MF_crotch")
should_process = False
if mf_crotch_group:
for vert in target_obj.data.vertices:
for g in vert.groups:
if g.group == mf_crotch_group.index and g.weight > 0.001:
should_process = True
break
if should_process:
break
if should_process:
if armature and armature.type == 'ARMATURE':
print(" MF_crotchグループが存在し、有効なウェイトを持つため処理を実行")
base_humanoid_weights = store_weights(target_obj, bone_groups)
reset_bone_weights(target_obj, bone_groups)
restore_weights(target_obj, all_weights)
# LeftUpperLegとRightUpperLegボーンにY軸回転を適用
print(" LeftUpperLegとRightUpperLegボーンにY軸回転を適用")
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperLegとRightUpperLegのboneNameを取得
left_upper_leg_bone = None
right_upper_leg_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperLeg":
left_upper_leg_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperLeg":
right_upper_leg_bone = bone_map.get("boneName")
# LeftUpperLegボーンに-45度のY軸回転を適用
if left_upper_leg_bone and left_upper_leg_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_leg_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-70), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperLegボーンに45度のY軸回転を適用
if right_upper_leg_bone and right_upper_leg_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_leg_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(70), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = target_obj
bpy.context.view_layer.update()
shape_key_state = save_shape_key_state(target_obj)
for key_block in target_obj.data.shape_keys.key_blocks:
key_block.value = 0.0
# 一時シェイプキーを作成
temp_shape_name = "WT_shape_forCrotch.MFTemp"
if target_obj.data.shape_keys and temp_shape_name in target_obj.data.shape_keys.key_blocks:
temp_shape_key = target_obj.data.shape_keys.key_blocks[temp_shape_name]
temp_shape_key.value = 1.0
# ウェイト初期化
reset_bone_weights(target_obj, bone_groups)
# ウェイト転送
print(" ウェイト転送開始")
transfer_success, distance_used = attempt_weight_transfer(bpy.data.objects["Body.BaseAvatar"], "BothSideWeights")
restore_shape_key_state(target_obj, shape_key_state)
temp_shape_key.value = 0.0
# LeftUpperLegとRightUpperLegボーンにY軸逆回転を適用
print(" LeftUpperLegとRightUpperLegボーンにY軸逆回転を適用")
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperLegとRightUpperLegのboneNameを取得
left_upper_leg_bone = None
right_upper_leg_bone = None
for bone_map in base_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperLeg":
left_upper_leg_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperLeg":
right_upper_leg_bone = bone_map.get("boneName")
# LeftUpperLegボーンに-45度のY軸回転を適用
if left_upper_leg_bone and left_upper_leg_bone in armature.pose.bones:
bone = armature.pose.bones[left_upper_leg_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(70), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperLegボーンに45度のY軸回転を適用
if right_upper_leg_bone and right_upper_leg_bone in armature.pose.bones:
bone = armature.pose.bones[right_upper_leg_bone]
current_world_matrix = armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-70), 4, 'Y')
bone.matrix = armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.objects.active = target_obj
bpy.context.view_layer.update()
# bone_groupsのウェイトとbase_humanoid_weightsを合成
mf_crotch_group = target_obj.vertex_groups.get("MF_crotch")
if mf_crotch_group and base_humanoid_weights:
print(" ウェイト合成処理開始")
for vert in target_obj.data.vertices:
vert_idx = vert.index
# MF_crotchグループのウェイトを取得
mf_crotch_weight = 0.0
for g in vert.groups:
if g.group == mf_crotch_group.index:
mf_crotch_weight = g.weight
break
# 合成係数を計算
current_factor = mf_crotch_weight
base_factor = 1.0 - mf_crotch_weight
# bone_groupsに属するグループのウェイトを合成
for group_name in bone_groups:
if group_name in target_obj.vertex_groups:
group = target_obj.vertex_groups[group_name]
# 現在のウェイトを取得
current_weight = 0.0
for g in vert.groups:
if g.group == group.index:
current_weight = g.weight
break
# base_humanoid_weightsからのウェイトを取得
base_weight = 0.0
if vert_idx in base_humanoid_weights and group_name in base_humanoid_weights[vert_idx]:
base_weight = base_humanoid_weights[vert_idx][group_name]
# ウェイトを合成:(現在のウェイト) * (MF_crotchのウェイト) + (base_humanoid_weightsでのウェイト) * (1.0 - MF_crotchのウェイト)
blended_weight = current_weight * current_factor + base_weight * base_factor
# 合成されたウェイトを適用
if blended_weight > 0.0001: # 微小値は無視
group.add([vert_idx], blended_weight, 'REPLACE')
else:
try:
group.remove([vert_idx])
except RuntimeError:
pass
print(" ウェイト合成処理完了")
else:
print(" MF_crotchグループが存在しないか、アーマチュアが存在しないため処理をスキップ")
else:
print(" MF_crotchグループが存在しないか、有効なウェイトがないため処理をスキップ")
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_mode(type="VERT")
bpy.ops.mesh.select_all(action='DESELECT')
# InpaintMaskグループのウェイトが0.5以上の頂点を選択
inpaint_mask_group = target_obj.vertex_groups.get("InpaintMask")
if inpaint_mask_group:
for vert in target_obj.data.vertices:
for g in vert.groups:
if g.group == inpaint_mask_group.index and g.weight >= 0.5:
vert.select = True
break
# bone_groupsに含まれるすべての頂点グループに対してスムージングを実行
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
bpy.context.object.data.use_paint_mask = False
bpy.context.object.data.use_paint_mask_vertex = True
for group_name in bone_groups:
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups.active = target_obj.vertex_groups[group_name]
bpy.ops.object.vertex_group_smooth(factor=0.5, repeat=3, expand=0.0)
bpy.ops.object.mode_set(mode='OBJECT')
# 微小なウェイトを除外
cleanup_weights_time_start = time.time()
for vert in target_obj.data.vertices:
groups_to_remove = []
for g in vert.groups:
group_name = target_obj.vertex_groups[g.group].name
if group_name in bone_groups and g.weight < 0.001:
groups_to_remove.append(g.group)
# 微小なウェイトを持つグループからその頂点を削除
for group_idx in groups_to_remove:
try:
target_obj.vertex_groups[group_idx].remove([vert.index])
except RuntimeError:
continue
cleanup_weights_time = time.time() - cleanup_weights_time_start
print(f" 微小ウェイト除外: {cleanup_weights_time:.2f}秒")
# Create mappings
humanoid_to_bone = {bone_map["humanoidBoneName"]: bone_map["boneName"]
for bone_map in base_avatar_data["humanoidBones"]}
bone_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in base_avatar_data["humanoidBones"]}
# 転送後の新しい頂点グループを特定
new_groups = set(vg.name for vg in target_obj.vertex_groups)
added_groups = new_groups - original_groups
print(f" ボーングループ: {bone_groups}")
print(f" オリジナルグループ: {original_groups}")
print(f" 新規グループ: {new_groups}")
print(f" 追加グループ: {added_groups}")
# 現時点での全てのグループのウェイトを保存
num_vertices = len(target_obj.data.vertices)
all_transferred_weights = store_weights(target_obj, all_deform_groups)
clothing_bone_to_humanoid = {bone_map["boneName"]: bone_map["humanoidBoneName"]
for bone_map in clothing_avatar_data["humanoidBones"]}
clothing_bone_to_parent_humanoid = {}
for clothing_bone in clothing_armature.data.bones:
current_bone = clothing_bone
current_bone_name = current_bone.name
parent_humanoid_name = None
while current_bone:
if current_bone.name in clothing_bone_to_humanoid.keys():
parent_humanoid_name = clothing_bone_to_humanoid[current_bone.name]
break
current_bone = current_bone.parent
print(f"current_bone_name: {current_bone_name}, parent_humanoid_name: {parent_humanoid_name}")
if parent_humanoid_name:
clothing_bone_to_parent_humanoid[current_bone_name] = parent_humanoid_name
non_humanoid_parts_mask = np.zeros(num_vertices)
non_humanoid_total_weights = np.zeros(num_vertices)
for vert_idx, groups in original_non_humanoid_weights.items():
total_weight = 0.0
for group_name, weight in groups.items():
total_weight += weight
if total_weight > 1.0:
total_weight = 1.0
non_humanoid_total_weights[vert_idx] = total_weight
if total_weight > 0.999:
non_humanoid_parts_mask[vert_idx] = 1.0
transferred_weight_patterns = [None] * num_vertices
for vert_idx in range(num_vertices):
groups = all_transferred_weights.get(vert_idx, {})
converted_weights = defaultdict(float)
for group_name, weight in groups.items():
if weight <= 0.0:
continue
if group_name in auxiliary_bones_to_humanoid:
humanoid_name = auxiliary_bones_to_humanoid[group_name]
if humanoid_name:
converted_weights[humanoid_name] += weight
else:
humanoid_name = bone_to_humanoid.get(group_name)
if humanoid_name:
converted_weights[humanoid_name] += weight
else:
converted_weights[group_name] += weight
transferred_weight_patterns[vert_idx] = dict(converted_weights)
original_non_humanoid_weight_patterns = [None] * num_vertices
for vert_idx in range(num_vertices):
groups = original_non_humanoid_weights.get(vert_idx, {})
converted_weights = defaultdict(float)
for group_name, weight in groups.items():
if weight <= 0.0:
continue
parent_humanoid = clothing_bone_to_parent_humanoid.get(group_name)
if parent_humanoid:
converted_weights[parent_humanoid] += weight
else:
converted_weights[group_name] += weight
original_non_humanoid_weight_patterns[vert_idx] = dict(converted_weights)
cloth_bm = get_evaluated_mesh(target_obj)
cloth_bm.verts.ensure_lookup_table()
cloth_bm.faces.ensure_lookup_table()
vertex_coords = np.array([v.co for v in cloth_bm.verts])
pattern_difference_threshold = 0.2
neighbor_search_radius = 0.005
non_humanoid_difference_mask = np.zeros_like(non_humanoid_parts_mask)
hinge_bone_mask = np.zeros_like(non_humanoid_parts_mask)
hinge_group = target_obj.vertex_groups.get("HingeBone")
if hinge_group:
for vert_idx in range(num_vertices):
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == hinge_group.index and g.weight > 0.001:
hinge_bone_mask[vert_idx] = 1.0
break
if num_vertices > 0:
kd_tree = cKDTree(vertex_coords)
def calculate_pattern_difference(weights_a, weights_b):
if not weights_a and not weights_b:
return 0.0
keys = set(weights_a.keys()) | set(weights_b.keys())
difference = 0.0
for key in keys:
difference += abs(weights_a.get(key, 0.0) - weights_b.get(key, 0.0))
return difference
for vert_idx, mask_value in enumerate(non_humanoid_parts_mask):
if mask_value <= 0.0:
continue
base_pattern = original_non_humanoid_weight_patterns[vert_idx]
neighbor_indices = kd_tree.query_ball_point(vertex_coords[vert_idx], neighbor_search_radius)
for neighbor_idx in neighbor_indices:
if neighbor_idx == vert_idx:
continue
if non_humanoid_parts_mask[neighbor_idx] > 0.001:
continue
neighbor_pattern = transferred_weight_patterns[neighbor_idx]
if not neighbor_pattern:
continue
difference = calculate_pattern_difference(base_pattern, neighbor_pattern)
if difference > pattern_difference_threshold:
non_humanoid_difference_mask[vert_idx] = 1.0 * hinge_bone_mask[vert_idx] * hinge_bone_mask[vert_idx]
break
# non_humanoid_difference_maskを頂点グループとして追加
non_humanoid_difference_group = target_obj.vertex_groups.new(name="NonHumanoidDifference")
for vert_idx, mask_value in enumerate(non_humanoid_difference_mask):
if mask_value > 0.0:
non_humanoid_difference_group.add([vert_idx], 1.0, 'REPLACE')
# 現在のモードを保存
current_mode = bpy.context.object.mode
# Weight Paintモードに切り替え
bpy.context.view_layer.objects.active = target_obj
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
target_obj.vertex_groups.active_index = non_humanoid_difference_group.index
bpy.ops.paint.vert_select_all(action='SELECT')
# vertex_group_smoothを使用してスムージング
bpy.ops.object.vertex_group_smooth(factor=0.5, repeat=5, expand=0.5)
# DistanceFalloffMaskグループを作成
falloff_mask_time_start = time.time()
# CommonSwaySettingsから距離パラメータを取得
sway_settings = base_avatar_data.get("commonSwaySettings", {"startDistance": 0.025, "endDistance": 0.050})
distance_falloff_group = create_distance_falloff_transfer_mask(target_obj, base_avatar_data, 'DistanceFalloffMask',
max_distance=sway_settings["endDistance"],
min_distance=sway_settings["startDistance"])
target_obj.vertex_groups.active_index = distance_falloff_group.index
# vertex_group_smoothを使用してスムージング
bpy.ops.object.vertex_group_smooth(factor=1, repeat=3, expand=0.1)
falloff_mask_time = time.time() - falloff_mask_time_start
print(f" 距離フォールオフマスク作成: {falloff_mask_time:.2f}秒")
distance_falloff_group2 = create_distance_falloff_transfer_mask(target_obj, base_avatar_data, 'DistanceFalloffMask2',
max_distance=0.1,
min_distance=0.04)
target_obj.vertex_groups.active_index = distance_falloff_group2.index
# vertex_group_smoothを使用してスムージング
bpy.ops.object.vertex_group_smooth(factor=1, repeat=3, expand=0.1)
print(f" distance_falloff_group2: {distance_falloff_group2.index}")
# 元のモードに戻す
bpy.ops.object.mode_set(mode=current_mode)
non_humanoid_difference_weights = np.zeros(num_vertices)
distance_falloff_weights = np.zeros(num_vertices)
for vert_idx in range(num_vertices):
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == non_humanoid_difference_group.index:
non_humanoid_difference_weights[vert_idx] = g.weight
if g.group == distance_falloff_group2.index:
distance_falloff_weights[vert_idx] = g.weight
for vert_idx, groups in original_non_humanoid_weights.items():
for group_name, weight in groups.items():
if group_name in target_obj.vertex_groups:
result_weight = weight * ( 1.0 - non_humanoid_difference_weights[vert_idx] * distance_falloff_weights[vert_idx] )
target_obj.vertex_groups[group_name].add([vert_idx], result_weight, 'REPLACE')
current_humanoid_weights = store_weights(target_obj, bone_groups)
for vert_idx, groups in current_humanoid_weights.items():
for group_name, weight in groups.items():
if group_name in target_obj.vertex_groups:
factor = ( 1.0 - non_humanoid_total_weights[vert_idx] * (1.0 - non_humanoid_difference_weights[vert_idx] * distance_falloff_weights[vert_idx]) )
result_weight = weight * factor
target_obj.vertex_groups[group_name].add([vert_idx], result_weight, 'REPLACE')
for vert_idx in range(len(non_humanoid_total_weights)):
non_humanoid_total_weights[vert_idx] = non_humanoid_total_weights[vert_idx] * (1.0 - non_humanoid_difference_weights[vert_idx] * distance_falloff_weights[vert_idx])
cloth_bm.free()
# 各新規グループに対して親ボーンを見つけてウェイトを統合
group_merge_time_start = time.time()
max_iterations = 5
iteration = 0
while added_groups and iteration < max_iterations:
changed = False
remaining_groups = set()
print(f" 反復処理: {iteration}")
for group_name in added_groups:
print(f" グループ名: {group_name}")
if group_name not in target_obj.vertex_groups:
print(f" {group_name} は削除されています。スキップします")
continue
# 新規グループのウェイトが0より大きい頂点を取得
group = target_obj.vertex_groups[group_name]
verts_with_weight = []
for v in target_obj.data.vertices:
weight = get_vertex_weight_safe(group, v.index)
if weight > 0:
verts_with_weight.append(v)
print(f" ウェイトを持つ頂点数: {len(verts_with_weight)}")
if len(verts_with_weight) == 0:
print(f" {group_name} は空: スキップします")
continue
if group_name in bone_to_humanoid:
humanoid_group_name = bone_to_humanoid[group_name]
if "LeftToes" in humanoid_to_bone and humanoid_to_bone["LeftToes"] in original_groups:
if humanoid_group_name in left_foot_finger_humanoid_bones:
merge_weights_to_parent(target_obj, group_name, humanoid_to_bone["LeftToes"])
changed = True
continue
if "RightToes" in humanoid_to_bone and humanoid_to_bone["RightToes"] in original_groups:
if humanoid_group_name in right_foot_finger_humanoid_bones:
merge_weights_to_parent(target_obj, group_name, humanoid_to_bone["RightToes"])
changed = True
continue
# 該当する既存グループを探す
existing_groups = set()
for vert in verts_with_weight:
for g in vert.groups:
g_name = target_obj.vertex_groups[g.group].name
if g_name in bone_groups and g_name in original_groups and g.weight > 0:
existing_groups.add(g_name)
print(f" 既存グループ: {existing_groups}")
if len(existing_groups) == 1:
# 一つだけ該当する既存グループがある場合はそれに統合
merge_weights_to_parent(target_obj, group_name, list(existing_groups)[0])
changed = True
elif len(existing_groups) == 0:
# 該当する既存グループがない場合は隣接頂点も探索
bm = bmesh.new()
bm.from_mesh(target_obj.data)
bm.verts.ensure_lookup_table()
visited_verts = set(vert.index for vert in verts_with_weight)
queue = deque(verts_with_weight)
while queue:
vert = queue.popleft()
for edge in bm.verts[vert.index].link_edges:
other_vert = edge.other_vert(bm.verts[vert.index])
if other_vert.index not in visited_verts:
visited_verts.add(other_vert.index)
for g in target_obj.data.vertices[other_vert.index].groups:
if target_obj.vertex_groups[g.group].name in bone_groups and g.weight > 0:
existing_groups.add(target_obj.vertex_groups[g.group].name)
if len(existing_groups) > 1:
break
if len(existing_groups) == 1:
merge_weights_to_parent(target_obj, group_name, existing_groups.pop())
changed = True
break
queue.append(target_obj.data.vertices[other_vert.index])
bm.free()
print(f" 隣接探索後の既存グループ: {existing_groups}")
if len(existing_groups) != 1:
remaining_groups.add(group_name)
if not changed:
break
added_groups = remaining_groups
iteration += 1
group_merge_time = time.time() - group_merge_time_start
print(f" グループ統合処理: {group_merge_time:.2f}秒")
# 統合できなかった新規グループについて補助ボーンの処理を行う
aux_bone_time_start = time.time()
for group_name in list(added_groups): # Setのコピーを作成してイテレーション
for aux_set in base_avatar_data.get("auxiliaryBones", []):
if group_name in aux_set["auxiliaryBones"]:
humanoid_bone = aux_set["humanoidBoneName"]
if humanoid_bone in humanoid_to_bone and humanoid_to_bone[humanoid_bone] in bone_groups:
merge_weights_to_parent(target_obj, group_name, humanoid_to_bone[humanoid_bone])
try:
added_groups.remove(group_name)
except KeyError:
pass # group_nameが既に削除されている場合を無視
break
# それでも統合できなかった新規グループについてオリジナルのウェイトを加算
for group_name in added_groups:
if group_name not in target_obj.vertex_groups:
continue
group = target_obj.vertex_groups[group_name]
for vert in target_obj.data.vertices:
weight = get_vertex_weight_safe(group, vert.index)
if weight > 0:
for orig_group_name, orig_weight in original_humanoid_weights[vert.index].items():
if orig_group_name in target_obj.vertex_groups:
target_obj.vertex_groups[orig_group_name].add([vert.index], orig_weight * weight, 'ADD')
# 新規グループを削除
for group_name in added_groups:
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups.remove(target_obj.vertex_groups[group_name])
aux_bone_time = time.time() - aux_bone_time_start
print(f" 補助ボーン処理: {aux_bone_time:.2f}秒")
# 現在のウェイトを結果Aとして保存
store_result_a_time_start = time.time()
weights_a = {}
for vert_idx in range(len(target_obj.data.vertices)):
weights_a[vert_idx] = {}
for group in target_obj.vertex_groups:
if group.name in bone_groups:
try:
weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
weight = g.weight
break
weights_a[vert_idx][group.name] = weight
except Exception:
continue
store_result_a_time = time.time() - store_result_a_time_start
print(f" 結果A保存: {store_result_a_time:.2f}秒")
# 現在のウェイトをコピーして結果Bを作成
store_result_b_time_start = time.time()
weights_b = {}
for vert_idx in range(len(target_obj.data.vertices)):
weights_b[vert_idx] = {}
for group in target_obj.vertex_groups:
if group.name in bone_groups:
try:
weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
weight = g.weight
break
weights_b[vert_idx][group.name] = weight
except Exception:
continue
store_result_b_time = time.time() - store_result_b_time_start
print(f" 結果B保存: {store_result_b_time:.2f}秒")
# swayBonesの統合処理
sway_bones_time_start = time.time()
for sway_bone in base_avatar_data.get("swayBones", []):
parent_bone = sway_bone["parentBoneName"]
for affected_bone in sway_bone["affectedBones"]:
# 各頂点について処理
for vert_idx in weights_b:
if affected_bone in weights_b[vert_idx]:
affected_weight = weights_b[vert_idx][affected_bone]
# 親ボーンのウェイトに加算
if parent_bone not in weights_b[vert_idx]:
weights_b[vert_idx][parent_bone] = 0.0
weights_b[vert_idx][parent_bone] += affected_weight
# affected_boneのウェイトを削除
del weights_b[vert_idx][affected_bone]
sway_bones_time = time.time() - sway_bones_time_start
print(f" SwayBones処理: {sway_bones_time:.2f}秒")
# 結果AとBを合成
weight_blend_time_start = time.time()
for vert_idx in range(len(target_obj.data.vertices)):
# DistanceFalloffMaskのウェイトを取得
falloff_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == distance_falloff_group.index:
falloff_weight = g.weight
break
# 各頂点グループについて処理
for group_name in bone_groups:
if group_name in target_obj.vertex_groups:
weight_a = weights_a[vert_idx].get(group_name, 0.0)
weight_b = weights_b[vert_idx].get(group_name, 0.0)
# ウェイトを合成
final_weight = (weight_a * falloff_weight) + (weight_b * (1.0 - falloff_weight))
# 新しいウェイトを設定
group = target_obj.vertex_groups[group_name]
if final_weight > 0:
group.add([vert_idx], final_weight, 'REPLACE')
else:
try:
group.remove([vert_idx])
except RuntimeError:
pass
weight_blend_time = time.time() - weight_blend_time_start
print(f" ウェイト合成: {weight_blend_time:.2f}秒")
# 手のウェイト調整
hand_weights_time_start = time.time()
adjust_hand_weights(target_obj, armature, base_avatar_data)
hand_weights_time = time.time() - hand_weights_time_start
print(f" 手のウェイト調整: {hand_weights_time:.2f}秒")
#normalize_connected_components_weights(target_obj, base_avatar_data)
# 側面頂点へのウェイト伝播
propagate_time_start = time.time()
propagate_weights_to_side_vertices(target_obj, bone_groups, original_humanoid_weights, clothing_armature)
propagate_time = time.time() - propagate_time_start
print(f" 側面頂点へのウェイト伝播: {propagate_time:.2f}秒")
# サイドウェイトとボーンウェイトの比較と調整
comparison_time_start = time.time()
side_left_group = target_obj.vertex_groups.get("LeftSideWeights")
side_right_group = target_obj.vertex_groups.get("RightSideWeights")
failed_vertices_count = 0
if side_left_group and side_right_group:
for vert in target_obj.data.vertices:
# サイドウェイトの合計を計算
total_side_weight = 0.0
for g in vert.groups:
if g.group == side_left_group.index or g.group == side_right_group.index:
total_side_weight += g.weight
total_side_weight = min(total_side_weight, 1.0) # 0-1にクランプ
total_side_weight = total_side_weight - non_humanoid_total_weights[vert.index]
total_side_weight = max(total_side_weight, 0.0)
# bone_groupsの合計ウェイトを計算
total_bone_weight = 0.0
for g in vert.groups:
group_name = target_obj.vertex_groups[g.group].name
if group_name in bone_groups:
total_bone_weight += g.weight
# サイドウェイトがボーンウェイトより0.5以上大きい場合
if total_side_weight > total_bone_weight + 0.5:
# 現在のbone_groupsのウェイトを消去
for group in target_obj.vertex_groups:
if group.name in bone_groups:
try:
group.remove([vert.index])
except RuntimeError:
continue
# 元のウェイトを復元
if vert.index in original_humanoid_weights:
for group_name, weight in original_humanoid_weights[vert.index].items():
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups[group_name].add([vert.index], weight, 'REPLACE')
failed_vertices_count += 1
if failed_vertices_count > 0:
print(f" ウェイト転送失敗: {failed_vertices_count}頂点 -> オリジナルウェイトにフォールバック")
comparison_time = time.time() - comparison_time_start
print(f" サイドウェイト比較調整: {comparison_time:.2f}秒")
# apply_distance_normal_based_smoothingを実行
smoothing_time_start = time.time()
# target_vertex_groupsを構築(Chest, LeftBreast, RightBreastとそれらのauxiliaryBones)
target_vertex_groups = []
smoothing_mask_groups = []
target_humanoid_bones = [
"Chest", "LeftBreast", "RightBreast", "Neck", "Head", "LeftShoulder", "RightShoulder", "LeftUpperArm", "RightUpperArm",
"LeftHand",
"LeftThumbProximal", "LeftThumbIntermediate", "LeftThumbDistal",
"LeftIndexProximal", "LeftIndexIntermediate", "LeftIndexDistal",
"LeftMiddleProximal", "LeftMiddleIntermediate", "LeftMiddleDistal",
"LeftRingProximal", "LeftRingIntermediate", "LeftRingDistal",
"LeftLittleProximal", "LeftLittleIntermediate", "LeftLittleDistal",
"RightHand",
"RightThumbProximal", "RightThumbIntermediate", "RightThumbDistal",
"RightIndexProximal", "RightIndexIntermediate", "RightIndexDistal",
"RightMiddleProximal", "RightMiddleIntermediate", "RightMiddleDistal",
"RightRingProximal", "RightRingIntermediate", "RightRingDistal",
"RightLittleProximal", "RightLittleIntermediate", "RightLittleDistal"
]
smoothing_mask_humanoid_bones = [
"Chest", "LeftBreast", "RightBreast", "Neck", "Head", "LeftShoulder", "RightShoulder",
"LeftHand",
"LeftThumbProximal", "LeftThumbIntermediate", "LeftThumbDistal",
"LeftIndexProximal", "LeftIndexIntermediate", "LeftIndexDistal",
"LeftMiddleProximal", "LeftMiddleIntermediate", "LeftMiddleDistal",
"LeftRingProximal", "LeftRingIntermediate", "LeftRingDistal",
"LeftLittleProximal", "LeftLittleIntermediate", "LeftLittleDistal",
"RightHand",
"RightThumbProximal", "RightThumbIntermediate", "RightThumbDistal",
"RightIndexProximal", "RightIndexIntermediate", "RightIndexDistal",
"RightMiddleProximal", "RightMiddleIntermediate", "RightMiddleDistal",
"RightRingProximal", "RightRingIntermediate", "RightRingDistal",
"RightLittleProximal", "RightLittleIntermediate", "RightLittleDistal"
]
humanoid_to_bone = {bone_map["humanoidBoneName"]: bone_map["boneName"]
for bone_map in base_avatar_data["humanoidBones"]}
for humanoid_bone in target_humanoid_bones:
if humanoid_bone in humanoid_to_bone:
target_vertex_groups.append(humanoid_to_bone[humanoid_bone])
# auxiliaryBonesを追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
if aux_set["humanoidBoneName"] in target_humanoid_bones:
target_vertex_groups.extend(aux_set["auxiliaryBones"])
for humanoid_bone in smoothing_mask_humanoid_bones:
if humanoid_bone in humanoid_to_bone:
smoothing_mask_groups.append(humanoid_to_bone[humanoid_bone])
# auxiliaryBonesを追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
if aux_set["humanoidBoneName"] in smoothing_mask_humanoid_bones:
smoothing_mask_groups.extend(aux_set["auxiliaryBones"])
# Body.BaseAvatarオブジェクトを取得
body_obj = bpy.data.objects.get("Body.BaseAvatar")
# LeftBreastまたはRightBreastのボーンウェイトが0でない頂点があるかチェック
breast_bone_groups = []
breast_humanoid_bones = ["Hips", "LeftBreast", "RightBreast", "Neck", "Head", "LeftHand", "RightHand"]
for humanoid_bone in breast_humanoid_bones:
if humanoid_bone in humanoid_to_bone:
breast_bone_groups.append(humanoid_to_bone[humanoid_bone])
# LeftBreastとRightBreastのauxiliaryBonesも追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
if aux_set["humanoidBoneName"] in breast_humanoid_bones:
breast_bone_groups.extend(aux_set["auxiliaryBones"])
# target_objにbreast_bone_groupsのウェイトが0でない頂点があるかチェック
has_breast_weights = False
if breast_bone_groups:
for group_name in breast_bone_groups:
if group_name in target_obj.vertex_groups:
group = target_obj.vertex_groups[group_name]
# 頂点グループに実際にウェイトがあるかチェック
for vert in target_obj.data.vertices:
try:
weight = 0.0
for g in vert.groups:
if g.group == group.index:
weight = g.weight
break
if weight > 0:
has_breast_weights = True
break
except RuntimeError:
continue
if has_breast_weights:
break
if body_obj and target_vertex_groups and has_breast_weights:
print(f" 距離・法線ベースのスムージングを実行: {len(target_vertex_groups)}個のターゲットグループ (LeftBreast/RightBreastウェイト検出)")
apply_distance_normal_based_smoothing(
body_obj=body_obj,
cloth_obj=target_obj,
distance_min=0.005,
distance_max=0.015,
angle_min=15.0,
angle_max=30.0,
new_group_name="SmoothMask",
normal_radius=0.01,
smoothing_mask_groups=smoothing_mask_groups,
target_vertex_groups=target_vertex_groups,
smoothing_radius=0.05,
mask_group_name="MF_Blur"
)
else:
print(" Body.BaseAvatarオブジェクトが見つからないか、ターゲットグループが空です")
smoothing_time = time.time() - smoothing_time_start
print(f" 距離・法線ベースのスムージング: {smoothing_time:.2f}秒")
# 距離が大きいほどオリジナルのウェイトの比率を高めて合成するように調整
current_mode = bpy.context.object.mode
bpy.context.view_layer.objects.active = target_obj
bpy.ops.object.mode_set(mode='WEIGHT_PAINT')
target_obj.vertex_groups.active_index = distance_falloff_group2.index
print(f" distance_falloff_group2: {distance_falloff_group2.index}")
print(f" distance_falloff_group2_index: {target_obj.vertex_groups[distance_falloff_group2.name].index}")
# LeftBreastまたはRightBreastのボーンウェイトが0でない頂点があるかチェック
exclude_bone_groups = []
exclude_humanoid_bones = ["LeftBreast", "RightBreast"]
for humanoid_bone in exclude_humanoid_bones:
if humanoid_bone in humanoid_to_bone:
exclude_bone_groups.append(humanoid_to_bone[humanoid_bone])
# LeftBreastとRightBreastのauxiliaryBonesも追加
for aux_set in base_avatar_data.get("auxiliaryBones", []):
if aux_set["humanoidBoneName"] in exclude_humanoid_bones:
exclude_bone_groups.extend(aux_set["auxiliaryBones"])
# 胸部分は合成処理から除外する
if exclude_bone_groups:
new_group_weights = np.zeros(len(target_obj.data.vertices), dtype=np.float32)
for i, vertex in enumerate(target_obj.data.vertices):
for group in vertex.groups:
if group.group == distance_falloff_group2.index:
new_group_weights[i] = group.weight
break
total_target_weights = np.zeros(len(target_obj.data.vertices), dtype=np.float32)
for target_group_name in exclude_bone_groups:
if target_group_name in target_obj.vertex_groups:
target_group = target_obj.vertex_groups[target_group_name]
print(f" 頂点グループ '{target_group_name}' のウェイトを取得中...")
for i, vertex in enumerate(target_obj.data.vertices):
for group in vertex.groups:
if group.group == target_group.index:
total_target_weights[i] += group.weight
break
else:
print(f" 警告: 頂点グループ '{target_group_name}' が見つかりません")
masked_weights = np.maximum(new_group_weights, total_target_weights)
# 結果を新しい頂点グループに適用
for i in range(len(target_obj.data.vertices)):
distance_falloff_group2.add([i], masked_weights[i], 'REPLACE')
for vert_idx in range(len(target_obj.data.vertices)):
if vert_idx in original_humanoid_weights and non_humanoid_parts_mask[vert_idx] < 0.0001:
falloff_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == distance_falloff_group2.index:
falloff_weight = g.weight
break
for g in target_obj.data.vertices[vert_idx].groups:
if target_obj.vertex_groups[g.group].name in bone_groups:
weight = g.weight
group_name = target_obj.vertex_groups[g.group].name
target_obj.vertex_groups[group_name].add([vert_idx], weight * falloff_weight, 'REPLACE')
for group_name, weight in original_humanoid_weights[vert_idx].items():
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups[group_name].add([vert_idx], weight * (1.0 - falloff_weight), 'ADD')
bpy.ops.object.mode_set(mode=current_mode)
# Headボーンのウェイトをオリジナルに戻す処理
head_time_start = time.time()
head_bone_name = None
# base_avatar_dataからHeadラベルを持つボーンを検索
if base_avatar_data:
if "humanoidBones" in base_avatar_data:
for bone_data in base_avatar_data["humanoidBones"]:
if bone_data.get("humanoidBoneName", "") == "Head":
head_bone_name = bone_data.get("boneName", "")
break
if head_bone_name and head_bone_name in target_obj.vertex_groups:
print(f" Headボーンウェイトを処理中: {head_bone_name}")
head_vertices_count = 0
for vert_idx in range(len(target_obj.data.vertices)):
# オリジナルのHeadウェイトを取得
original_head_weight = 0.0
if vert_idx in original_humanoid_weights:
original_head_weight = original_humanoid_weights[vert_idx].get(head_bone_name, 0.0)
# 現在のHeadウェイトを取得
current_head_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == target_obj.vertex_groups[head_bone_name].index:
current_head_weight = g.weight
break
# Headウェイトの差分を計算
head_weight_diff = original_head_weight - current_head_weight
# Headウェイトをオリジナルの値に設定
if original_head_weight > 0.0:
target_obj.vertex_groups[head_bone_name].add([vert_idx], original_head_weight, 'REPLACE')
else:
# オリジナルが0の場合は削除
try:
target_obj.vertex_groups[head_bone_name].remove([vert_idx])
except RuntimeError:
pass
# 差分がある場合、他のボーンのオリジナルウェイトに差分を掛けて加算
if abs(head_weight_diff) > 0.0001 and vert_idx in original_humanoid_weights:
for group in target_obj.vertex_groups:
if group.name in bone_groups and group.name != head_bone_name:
# オリジナルウェイトを取得
original_weight = original_humanoid_weights[vert_idx].get(group.name, 0.0)
if original_weight > 0.0:
# 現在のウェイトを取得
current_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
current_weight = g.weight
break
# 差分に基づいて加算
new_weight = current_weight + (original_weight * head_weight_diff)
if new_weight > 0.0:
group.add([vert_idx], new_weight, 'REPLACE')
else:
try:
group.remove([vert_idx])
except RuntimeError:
pass
# 最終的にall_deform_groupsのウェイト合計が1未満の場合、埋め合わせる
total_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
group_name = target_obj.vertex_groups[g.group].name
if group_name in all_deform_groups:
total_weight += g.weight
# ウェイト合計が1未満の場合、不足分を埋め合わせる
if total_weight < 0.9999 and vert_idx in original_humanoid_weights:
weight_shortage = 1.0 - total_weight
for group in target_obj.vertex_groups:
if group.name in bone_groups:
# オリジナルウェイトを取得
original_weight = original_humanoid_weights[vert_idx].get(group.name, 0.0)
if original_weight > 0.0:
# 現在のウェイトを取得
current_weight = 0.0
for g in target_obj.data.vertices[vert_idx].groups:
if g.group == group.index:
current_weight = g.weight
break
# 不足分をオリジナルウェイトに基づいて加算
additional_weight = original_weight * weight_shortage
new_weight = current_weight + additional_weight
group.add([vert_idx], new_weight, 'REPLACE')
head_vertices_count += 1
if head_vertices_count > 0:
print(f" Headウェイト処理完了: {head_vertices_count}頂点")
head_time = time.time() - head_time_start
print(f" Headウェイト処理: {head_time:.2f}秒")
# clothMetadataに基づいてウェイトを選択的に元に戻す
metadata_time_start = time.time()
if cloth_metadata:
mesh_name = target_obj.name
if mesh_name in cloth_metadata:
vertex_max_distances = cloth_metadata[mesh_name]
print(f" メッシュのクロスメタデータを処理: {mesh_name}")
count = 0
# 各頂点について処理
for vert_idx in range(len(target_obj.data.vertices)):
# maxDistanceを取得(ない場合は10.0を使用)
max_distance = float(vertex_max_distances.get(str(vert_idx), 10.0))
# maxDistanceが1.0より大きい場合、元のウェイトを復元
if max_distance > 1.0:
if vert_idx in original_humanoid_weights:
# 現在のグループをすべて削除
for group in target_obj.vertex_groups:
if group.name in bone_groups:
try:
group.remove([vert_idx])
except RuntimeError:
continue
# 元のウェイトを復元
for group_name, weight in original_humanoid_weights[vert_idx].items():
if group_name in target_obj.vertex_groups:
target_obj.vertex_groups[group_name].add([vert_idx], weight, 'REPLACE')
count += 1
print(f" 処理された頂点数: {count}")
metadata_time = time.time() - metadata_time_start
print(f" クロスメタデータ処理: {metadata_time:.2f}秒")
total_time = time.time() - start_time
print(f"処理完了: {target_obj.name} - 合計時間: {total_time:.2f}秒")
def apply_pose_as_rest(armature):
# アクティブなオブジェクトを保存
original_active = bpy.context.active_object
# 指定されたアーマチュアを取得
if not armature or armature.type != 'ARMATURE':
print(f"Error: {armature.name} is not a valid armature object")
return
# アーマチュアをアクティブに設定
bpy.context.view_layer.objects.active = armature
# 編集モードに入る
bpy.ops.object.mode_set(mode='POSE')
bpy.ops.pose.select_all(action='SELECT')
# 現在のポーズをレストポーズとして適用
bpy.ops.pose.armature_apply()
# 元のモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# 元のアクティブオブジェクトを復元
bpy.context.view_layer.objects.active = original_active
def apply_all_transforms():
"""Apply transforms to all objects while maintaining world space positions"""
bpy.ops.object.mode_set(mode='OBJECT')
# 選択状態を保存
original_selection = {obj: obj.select_get() for obj in bpy.data.objects}
original_active = bpy.context.view_layer.objects.active
# すべてのオブジェクトを取得し、親子関係の深さでソート
def get_object_depth(obj):
depth = 0
parent = obj.parent
while parent:
depth += 1
parent = parent.parent
return depth
# 深い階層から順番に処理するためにソート
all_objects = sorted(bpy.data.objects, key=get_object_depth, reverse=True)
# 親子関係情報を保存するリスト
parent_info_list = []
# 第1段階: すべてのオブジェクトで親子関係を解除してTransformを適用
for obj in all_objects:
if obj.type not in {'MESH', 'EMPTY', 'ARMATURE', 'CURVE', 'SURFACE', 'FONT'}:
continue
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
# 現在のオブジェクトを選択してアクティブに
obj.select_set(True)
bpy.context.view_layer.objects.active = obj
# 親子関係情報を保存
parent = obj.parent
parent_type = obj.parent_type
parent_bone = obj.parent_bone if parent_type == 'BONE' else None
if parent:
parent_info_list.append({
'obj': obj,
'parent': parent,
'parent_type': parent_type,
'parent_bone': parent_bone
})
# 親子関係を一時的に解除(位置は保持)
if parent:
bpy.ops.object.parent_clear(type='CLEAR_KEEP_TRANSFORM')
# Armatureオブジェクトまたは Armature モディファイアを持つMeshオブジェクトの場合
has_armature = obj.type == 'ARMATURE' or \
(obj.type == 'MESH' and any(mod.type == 'ARMATURE' for mod in obj.modifiers))
if has_armature:
# すべての Transform を適用
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
else:
# スケールのみ適用
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)
# 第2段階: すべての親子関係をまとめて復元
for parent_info in parent_info_list:
obj = parent_info['obj']
parent = parent_info['parent']
parent_type = parent_info['parent_type']
parent_bone = parent_info['parent_bone']
# すべての選択を解除
bpy.ops.object.select_all(action='DESELECT')
if parent_type == 'BONE' and parent_bone:
# ボーン親だった場合
obj.select_set(True)
bpy.context.view_layer.objects.active = parent
parent.select_set(True)
# ポーズモードに切り替えてボーンをアクティブに設定
bpy.ops.object.mode_set(mode='POSE')
parent.data.bones.active = parent.data.bones[parent_bone]
# オブジェクトモードに戻る
bpy.ops.object.mode_set(mode='OBJECT')
# ボーンペアレントを設定
bpy.ops.object.parent_set(type='BONE', keep_transform=True)
print(f"Restored bone parent '{parent_bone}' for object '{obj.name}'")
else:
# オブジェクト親だった場合
obj.select_set(True)
parent.select_set(True)
bpy.context.view_layer.objects.active = parent
bpy.ops.object.parent_set(type='OBJECT', keep_transform=True)
# 元の選択状態を復元
for obj, was_selected in original_selection.items():
obj.select_set(was_selected)
bpy.context.view_layer.objects.active = original_active
def rename_shape_keys_from_mappings(meshes, blend_shape_mappings):
"""
辞書データに基づいてメッシュのシェイプキー名を置き換える
辞書の値(カスタム名)と一致するシェイプキーがあれば、
それをキー(ラベル名)に置き換える
Parameters:
meshes: メッシュオブジェクトのリスト
blend_shape_mappings: {label: customName} の辞書
"""
if not blend_shape_mappings:
return
# 逆マッピングを作成(customName -> label)
reverse_mappings = {custom_name: label for label, custom_name in blend_shape_mappings.items()}
for obj in meshes:
if not obj.data.shape_keys:
continue
# 名前を変更する必要があるシェイプキーを収集
keys_to_rename = []
for shape_key in obj.data.shape_keys.key_blocks:
if shape_key.name in reverse_mappings:
new_name = reverse_mappings[shape_key.name]
keys_to_rename.append((shape_key, new_name))
# 名前を変更
for shape_key, new_name in keys_to_rename:
old_name = shape_key.name
shape_key.name = new_name
print(f"Renamed shape key: {old_name} -> {new_name} on mesh {obj.name}")
def merge_and_clean_generated_shapekeys(clothing_meshes, blend_shape_labels=None):
"""
apply_blendshape_deformation_fieldsで作成されたシェイプキーを削除し、
_generatedサフィックス付きシェイプキーを処理する
_generatedで終わるシェイプキー名から_generatedを除いた名前のシェイプキーが存在する場合、
そのシェイプキーを_generatedシェイプキーの内容で上書きして、_generatedシェイプキーを削除する
Parameters:
clothing_meshes: 衣装メッシュのリスト
blend_shape_labels: ブレンドシェイプラベルのリスト
"""
for obj in clothing_meshes:
if not obj.data.shape_keys:
continue
# _generatedサフィックス付きシェイプキーの処理
generated_shape_keys = []
for shape_key in obj.data.shape_keys.key_blocks:
if shape_key.name.endswith("_generated"):
generated_shape_keys.append(shape_key.name)
# _generatedシェイプキーを対応するベースシェイプキーに統合
for generated_name in generated_shape_keys:
base_name = generated_name[:-10] # "_generated"を除去
generated_key = obj.data.shape_keys.key_blocks.get(generated_name)
base_key = obj.data.shape_keys.key_blocks.get(base_name)
if generated_key and base_key:
# generatedシェイプキーの内容でベースシェイプキーを上書き
for i, point in enumerate(generated_key.data):
base_key.data[i].co = point.co
print(f"Merged {generated_name} into {base_name} for {obj.name}")
# generatedシェイプキーを削除
obj.shape_key_remove(generated_key)
print(f"Removed generated shape key: {generated_name} from {obj.name}")
# 従来の機能: blend_shape_labelsで指定されたシェイプキーの削除
if blend_shape_labels:
shape_keys_to_remove = []
for label in blend_shape_labels:
shape_key_name = f"{label}_BaseShape"
if shape_key_name in obj.data.shape_keys.key_blocks:
shape_keys_to_remove.append(shape_key_name)
for label in blend_shape_labels:
shape_key_name = f"{label}_temp"
if shape_key_name in obj.data.shape_keys.key_blocks:
shape_keys_to_remove.append(shape_key_name)
# シェイプキーを削除
for shape_key_name in shape_keys_to_remove:
shape_key = obj.data.shape_keys.key_blocks.get(shape_key_name)
if shape_key:
obj.shape_key_remove(shape_key)
print(f"Removed shape key: {shape_key_name} from {obj.name}")
# 不要なシェイプキーを削除
shape_keys_to_remove = []
for shape_key in obj.data.shape_keys.key_blocks:
if shape_key.name.endswith(".MFTemp"):
shape_keys_to_remove.append(shape_key.name)
for shape_key_name in shape_keys_to_remove:
shape_key = obj.data.shape_keys.key_blocks.get(shape_key_name)
if shape_key:
obj.shape_key_remove(shape_key)
print(f"Removed shape key: {shape_key_name} from {obj.name}")
def set_highheel_shapekey_values(clothing_meshes, blend_shape_labels=None, base_avatar_data=None):
"""
Highheelを含むシェイプキーの値を1にする
Parameters:
clothing_meshes: 衣装メッシュのリスト
blend_shape_labels: ブレンドシェイプラベルのリスト
base_avatar_data: ベースアバターデータ
"""
if not blend_shape_labels or not base_avatar_data:
return
# base_avatar_dataのblendShapeFieldsの存在確認
if "blendShapeFields" not in base_avatar_data:
return
# まずHighheelを含むラベルを検索
highheel_labels = [label for label in blend_shape_labels if "highheel" in label.lower() and "off" not in label.lower()]
base_highheel_fields = [field for field in base_avatar_data["blendShapeFields"]
if "highheel" in field.get("label", "").lower() and "off" not in field.get("label", "").lower()]
# Highheelを含むラベルが無い場合は、Heelを含むラベルを検索
if not highheel_labels:
highheel_labels = [label for label in blend_shape_labels if "heel" in label.lower() and "off" not in label.lower()]
base_highheel_fields = [field for field in base_avatar_data["blendShapeFields"]
if "heel" in field.get("label", "").lower() and "off" not in field.get("label", "").lower()]
# 条件:blend_shape_labelsに該当ラベルが一つだけ、かつbase_avatar_dataに該当フィールドが一つだけ
if len(highheel_labels) != 1 or len(base_highheel_fields) != 1:
return
# 唯一のラベルとフィールドを取得
target_label = highheel_labels[0]
base_field = base_highheel_fields[0]
base_label = base_field.get("label", "")
# 各メッシュのシェイプキーをチェック
for obj in clothing_meshes:
if not obj.data.shape_keys:
continue
# base_avatar_dataのラベルでシェイプキーを探す
if base_label in obj.data.shape_keys.key_blocks:
shape_key = obj.data.shape_keys.key_blocks[base_label]
shape_key.value = 1.0
print(f"Set shape key '{base_label}' value to 1.0 on {obj.name}")
def matrix_to_list(matrix):
"""
Matrix型をリストに変換する (JSON保存用)
Parameters:
matrix: mathutils.Matrix - 変換する行列
Returns:
list: 行列の各要素をリストとして表現
"""
return [list(row) for row in matrix]
def export_armature_bone_data_to_json(armature_obj: bpy.types.Object, output_path: str = None) -> dict:
"""
指定されたArmatureに含まれるすべてのボーンのワールド座標系での位置、回転、スケールをJSON形式で出力する
Parameters:
armature_obj: Armatureオブジェクト
output_path: JSONファイルの出力パス(指定しない場合はファイル出力しない)
Returns:
dict: ボーン情報の辞書
"""
import json
if not armature_obj or armature_obj.type != 'ARMATURE':
print(f"Error: 無効なArmatureオブジェクトです: {armature_obj}")
return {}
bone_data = {
"armature_name": armature_obj.name,
"export_timestamp": str(bpy.context.scene.frame_current),
"bones": {}
}
# アーマチュアをアクティブにしてPoseモードに設定
original_active = bpy.context.view_layer.objects.active
original_mode = bpy.context.mode
bone_convert_matrix = Matrix(((1.0, 0.0, 0.0, 0.0),
(0.0, 0.0, 1.0, 0.0),
(0.0, -1.0, 0.0, 0.0),
(0.0, 0.0, 0.0, 1.0)))
try:
bpy.context.view_layer.objects.active = armature_obj
bpy.ops.object.mode_set(mode='POSE')
# 各ボーンの情報を取得
for pose_bone in armature_obj.pose.bones:
bone = armature_obj.data.bones[pose_bone.name]
# ワールド座標系でのマトリックス
world_matrix = armature_obj.matrix_world @ pose_bone.matrix @ bone_convert_matrix
# ルートからのパスを構築
bone_path = []
current_bone = pose_bone
while current_bone:
bone_path.insert(0, current_bone.name) # 先頭に挿入してルートから順に並べる
current_bone = current_bone.parent
bone_info = {
"matrix": matrix_to_list(world_matrix),
"parent": pose_bone.parent.name if pose_bone.parent else None,
"bone_path": bone_path,
"bone_depth": len(bone_path) - 1, # ルートを0として深度を計算
"bone_length": float(pose_bone.length)
}
bone_data["bones"][pose_bone.name] = bone_info
# JSONファイルとして出力
if output_path:
with open(output_path, 'w', encoding='utf-8') as f:
json.dump(bone_data, f, indent=2, ensure_ascii=False)
print(f"ボーン情報をJSONファイルに出力しました: {output_path}")
print(f"Armature '{armature_obj.name}' のボーン情報を取得しました: {len(bone_data['bones'])}個のボーン")
return bone_data
except Exception as e:
print(f"ボーン情報の取得中にエラーが発生しました: {e}")
return {}
finally:
# 元の状態に戻す
if original_active:
bpy.context.view_layer.objects.active = original_active
if original_mode != 'POSE':
try:
bpy.ops.object.mode_set(mode='OBJECT')
except:
pass
def round_bone_coordinates(armature: bpy.types.Object, decimal_places: int = 6) -> None:
"""
アーマチュアのすべてのボーンのhead、tail座標およびRoll値を指定された小数点位置で四捨五入する。
Args:
armature: 対象のアーマチュアオブジェクト
decimal_places: 四捨五入する小数点以下の桁数 (デフォルト: 6)
"""
if not armature or armature.type != 'ARMATURE':
print(f"Warning: Invalid armature object for rounding bone coordinates")
return
# エディットモードに切り替え
bpy.context.view_layer.objects.active = armature
bpy.ops.object.mode_set(mode='EDIT')
try:
edit_bones = armature.data.edit_bones
rounded_count = 0
for bone in edit_bones:
# headの座標を四捨五入
bone.head.x = round(bone.head.x, decimal_places)
bone.head.y = round(bone.head.y, decimal_places)
bone.head.z = round(bone.head.z, decimal_places)
# tailの座標を四捨五入
bone.tail.x = round(bone.tail.x, decimal_places)
bone.tail.y = round(bone.tail.y, decimal_places)
bone.tail.z = round(bone.tail.z, decimal_places)
# Roll値を四捨五入
bone.roll = round(bone.roll, decimal_places - 3)
rounded_count += 1
print(f"ボーン座標の四捨五入完了: {rounded_count}個のボーン(小数点以下{decimal_places}桁)")
finally:
# 元のモードに戻す
bpy.ops.object.mode_set(mode='OBJECT')
def export_fbx(filepath: str, selected_only: bool = True) -> None:
"""Export selected objects to FBX."""
try:
bpy.ops.export_scene.fbx(
filepath=filepath,
use_selection=selected_only,
apply_scale_options='FBX_SCALE_ALL',
apply_unit_scale=True,
add_leaf_bones=False,
axis_forward='-Z', axis_up='Y'
)
except Exception as e:
raise Exception(f"Failed to export FBX: {str(e)}")
def propagate_bone_weights(mesh_obj: bpy.types.Object, temp_group_name: str = "PropagatedWeightsTemp", max_iterations: int = 500) -> Optional[str]:
"""
ボーン変形に関わるボーンウェイトを持たない頂点にウェイトを伝播させる。
Parameters:
mesh_obj: メッシュオブジェクト
max_iterations: 最大反復回数
Returns:
Optional[str]: 伝播させた頂点を記録した頂点グループの名前。伝播が不要な場合はNone
"""
# アーマチュアモディファイアからアーマチュアを取得
armature_obj = None
for modifier in mesh_obj.modifiers:
if modifier.type == 'ARMATURE':
armature_obj = modifier.object
break
if not armature_obj:
print(f"Warning: No armature modifier found in {mesh_obj.name}")
return None
# アーマチュアのすべてのボーン名を取得
deform_groups = {bone.name for bone in armature_obj.data.bones}
# BMeshを作成
bm = bmesh.new()
bm.from_mesh(mesh_obj.data)
bm.verts.ensure_lookup_table()
bm.edges.ensure_lookup_table()
# 頂点ごとのウェイト情報を取得
vertex_weights = {}
vertices_without_weights = set()
for vert in mesh_obj.data.vertices:
has_weight = False
weights = {}
for group in mesh_obj.vertex_groups:
if group.name in deform_groups:
try:
weight = 0.0
for g in vert.groups:
if g.group == group.index:
weight = g.weight
has_weight = True
break
if weight > 0:
weights[group.name] = weight
except RuntimeError:
continue
vertex_weights[vert.index] = weights
if not weights:
vertices_without_weights.add(vert.index)
# ウェイトを持たない頂点がない場合は処理を終了
if not vertices_without_weights:
return None
print(f"Found {len(vertices_without_weights)} vertices without weights in {mesh_obj.name}")
# 一時的な頂点グループを作成(既存の同名グループがあれば削除)
if temp_group_name in mesh_obj.vertex_groups:
mesh_obj.vertex_groups.remove(mesh_obj.vertex_groups[temp_group_name])
temp_group = mesh_obj.vertex_groups.new(name=temp_group_name)
# 反復処理
total_propagated = 0
iteration = 0
while iteration < max_iterations and vertices_without_weights:
propagated_this_iteration = 0
remaining_vertices = set()
# 各ウェイトなし頂点について処理
for vert_idx in vertices_without_weights:
vert = bm.verts[vert_idx]
# 隣接頂点を取得
neighbors = set()
for edge in vert.link_edges:
other = edge.other_vert(vert)
if vertex_weights[other.index]:
neighbors.add(other)
if neighbors:
# 最も近い頂点を見つける
closest_vert = min(neighbors,
key=lambda v: (v.co - vert.co).length)
# ウェイトをコピー
vertex_weights[vert_idx] = vertex_weights[closest_vert.index].copy()
temp_group.add([vert_idx], 1.0, 'REPLACE') # 伝播頂点を記録
propagated_this_iteration += 1
else:
remaining_vertices.add(vert_idx)
if propagated_this_iteration == 0:
break
print(f"Iteration {iteration + 1}: Propagated weights to {propagated_this_iteration} vertices in {mesh_obj.name}")
total_propagated += propagated_this_iteration
vertices_without_weights = remaining_vertices
iteration += 1
# 残りのウェイトなし頂点に平均ウェイトを割り当て
if vertices_without_weights:
total_weights = {}
weight_count = 0
# まず平均ウェイトを計算
for vert_idx, weights in vertex_weights.items():
if weights:
weight_count += 1
for group_name, weight in weights.items():
if group_name not in total_weights:
total_weights[group_name] = 0.0
total_weights[group_name] += weight
if weight_count > 0:
average_weights = {
group_name: weight / weight_count
for group_name, weight in total_weights.items()
}
# 残りの頂点に平均ウェイトを適用
num_averaged = len(vertices_without_weights)
print(f"Applying average weights to remaining {num_averaged} vertices in {mesh_obj.name}")
for vert_idx in vertices_without_weights:
vertex_weights[vert_idx] = average_weights.copy()
temp_group.add([vert_idx], 1.0, 'REPLACE') # 伝播頂点を記録
total_propagated += num_averaged
# 新しいウェイトを適用
for vert_idx, weights in vertex_weights.items():
for group_name, weight in weights.items():
if group_name in mesh_obj.vertex_groups:
mesh_obj.vertex_groups[group_name].add([vert_idx], weight, 'REPLACE')
print(f"Total: Propagated weights to {total_propagated} vertices in {mesh_obj.name}")
bm.free()
return temp_group_name
def remove_propagated_weights(mesh_obj: bpy.types.Object, temp_group_name: str) -> None:
"""
伝播させたウェイトを削除する
Parameters:
mesh_obj: メッシュオブジェクト
temp_group_name: 伝播頂点を記録した頂点グループの名前
"""
# 一時頂点グループが存在することを確認
temp_group = mesh_obj.vertex_groups.get(temp_group_name)
if not temp_group:
return
# アーマチュアモディファイアからアーマチュアを取得
armature_obj = None
for modifier in mesh_obj.modifiers:
if modifier.type == 'ARMATURE':
armature_obj = modifier.object
break
if not armature_obj:
print(f"Warning: No armature modifier found in {mesh_obj.name}")
return
# アーマチュアのすべてのボーン名を取得
deform_groups = {bone.name for bone in armature_obj.data.bones}
# 伝播させた頂点のウェイトを削除
for vert in mesh_obj.data.vertices:
# 一時グループのウェイトを取得
weight = 0.0
for g in vert.groups:
if g.group == temp_group.index:
weight = g.weight
break
# ウェイトが0より大きい場合(伝播された頂点の場合)
if weight > 0:
for group in mesh_obj.vertex_groups:
try:
group.remove([vert.index])
except RuntimeError:
continue
# 一時頂点グループを削除
mesh_obj.vertex_groups.remove(temp_group)
def update_cloth_metadata(metadata_dict: dict, output_path: str, vertex_index_mapping: dict) -> None:
"""
ClothMetadataの頂点位置を更新し、指定されたパスに保存する
Parameters:
metadata_dict: 元のClothMetadataの辞書
output_path: 保存先のパス
vertex_index_mapping: Unity頂点インデックスからBlender頂点インデックスへのマッピング
"""
# 各メッシュについて処理
for cloth_data in metadata_dict.get("clothMetadata", []):
mesh_name = cloth_data["meshName"]
mesh_obj = bpy.data.objects.get(mesh_name)
if not mesh_obj or mesh_obj.type != 'MESH':
print(f"Warning: Mesh {mesh_name} not found")
continue
# このメッシュのマッピング情報を取得
mesh_mapping = vertex_index_mapping.get(mesh_name, {})
if not mesh_mapping:
print(f"Warning: No vertex mappings found for {mesh_name}")
continue
# 評価済みメッシュを取得(モディファイア適用後の状態)
depsgraph = bpy.context.evaluated_depsgraph_get()
evaluated_obj = mesh_obj.evaluated_get(depsgraph)
evaluated_mesh = evaluated_obj.data
# vertexDataを更新
for i, data in enumerate(cloth_data.get("vertexData", [])):
# Unity頂点インデックスに対応するBlender頂点インデックスを取得
blender_vert_idx = mesh_mapping.get(i)
if blender_vert_idx is not None and blender_vert_idx < len(evaluated_mesh.vertices):
# ワールド座標を取得
world_pos = evaluated_obj.matrix_world @ evaluated_mesh.vertices[blender_vert_idx].co
# Blender座標系からUnity座標系に変換
data["position"]["x"] = -world_pos.x
data["position"]["y"] = world_pos.z
data["position"]["z"] = -world_pos.y
else:
print(f"Warning: No mapping found for Unity vertex {i} in {mesh_name}")
# 更新したデータを保存
try:
with open(output_path, 'w', encoding='utf-8') as f:
json.dump(metadata_dict, f, indent=4)
print(f"Updated cloth metadata saved to {output_path}")
except Exception as e:
print(f"Error saving cloth metadata: {e}")
def process_single_config(args, config_pair, pair_index, total_pairs, overall_start_time):
try:
import time
start_time = time.time()
use_subdivision = not args.no_subdivision
if pair_index != 0:
use_subdivision = False
use_triangulation = not args.no_triangle
bpy.ops.object.mode_set(mode='OBJECT')
# Load base file
print(f"Status: ベースファイル読み込み中")
print(f"Progress: {(pair_index + 0.05) / total_pairs * 0.9:.3f}")
load_base_file(args.base)
base_load_time = time.time()
print(f"ベースファイル読み込み: {base_load_time - start_time:.2f}秒")
# Process base avatar
print(f"Status: ベースアバター処理中")
print(f"Progress: {(pair_index + 0.1) / total_pairs * 0.9:.3f}")
base_mesh, base_armature, base_avatar_data = process_base_avatar(
config_pair['base_fbx'],
config_pair['base_avatar_data']
)
# Process clothing avatar
print(f"Status: 衣装データ処理中")
print(f"Progress: {(pair_index + 0.15) / total_pairs * 0.9:.3f}")
clothing_meshes, clothing_armature, clothing_avatar_data = process_clothing_avatar(
config_pair['input_clothing_fbx_path'],
config_pair['clothing_avatar_data'],
config_pair['hips_position'],
config_pair['target_meshes'],
config_pair['mesh_renderers']
)
# ブレンドシェイプマッピングに基づいてシェイプキー名を変換
if config_pair.get('blend_shape_mappings'):
rename_shape_keys_from_mappings(clothing_meshes, config_pair['blend_shape_mappings'])
clothing_process_time = time.time()
print(f"衣装データ処理: {clothing_process_time - base_load_time:.2f}秒")
global _is_A_pose
if pair_index == 0:
_is_A_pose = is_A_pose(
clothing_avatar_data,
clothing_armature,
init_pose_filepath=config_pair['init_pose'],
pose_filepath=config_pair['pose_data'],
clothing_avatar_data_filepath=config_pair['clothing_avatar_data']
)
print(f"is_A_pose: {_is_A_pose}")
if _is_A_pose and base_avatar_data and base_avatar_data.get('basePoseA', None):
print(f"AポーズのためAポーズ用ベースポーズを使用")
base_avatar_data['basePose'] = base_avatar_data['basePoseA']
base_pose_filepath = base_avatar_data.get('basePose', None)
if base_pose_filepath and config_pair.get('do_not_use_base_pose', 0) == 0:
pose_dir = os.path.dirname(os.path.abspath(config_pair['base_avatar_data']))
base_pose_filepath = os.path.join(pose_dir, base_pose_filepath)
print(f"Applying target avatar base pose from {base_pose_filepath}")
add_pose_from_json(base_armature, base_pose_filepath, base_avatar_data, invert=False)
base_process_time = time.time()
print(f"ベースアバター処理: {base_process_time - clothing_process_time:.2f}秒")
print(f"Status: クロスメタデータ読み込み中")
print(f"Progress: {(pair_index + 0.2) / total_pairs * 0.9:.3f}")
cloth_metadata, vertex_index_mapping = load_cloth_metadata(args.cloth_metadata)
metadata_load_time = time.time()
print(f"クロスメタデータ読み込み: {metadata_load_time - base_process_time:.2f}秒")
# メッシュマテリアルデータを読み込み(初回のみ)
if pair_index == 0:
print(f"Status: メッシュマテリアルデータ読み込み中")
print(f"Progress: {(pair_index + 0.22) / total_pairs * 0.9:.3f}")
load_mesh_material_data(args.mesh_material_data)
material_load_time = time.time()
print(f"メッシュマテリアルデータ読み込み: {material_load_time - metadata_load_time:.2f}秒")
else:
material_load_time = metadata_load_time
# Setup weight transfer
print(f"Status: ウェイト転送セットアップ中")
print(f"Progress: {(pair_index + 0.25) / total_pairs * 0.9:.3f}")
setup_weight_transfer()
setup_time = time.time()
print(f"ウェイト転送セットアップ: {setup_time - metadata_load_time:.2f}秒")
print(f"Status: ベースアバターウェイト更新中")
print(f"Progress: {(pair_index + 0.3) / total_pairs * 0.9:.3f}")
remove_empty_vertex_groups(base_mesh)
# Apply bone name conversion if provided
if hasattr(args, 'name_conv') and args.name_conv:
try:
with open(args.name_conv, 'r', encoding='utf-8') as f:
name_conv_data = json.load(f)
apply_bone_name_conversion(clothing_armature, clothing_meshes, name_conv_data)
print(f"ボーン名前変更処理完了: {args.name_conv}")
except Exception as e:
print(f"Warning: ボーン名前変更処理でエラーが発生しました: {e}")
# Normalize clothing bone names before weight updates
normalize_clothing_bone_names(clothing_armature, clothing_avatar_data, clothing_meshes)
update_base_avatar_weights(base_mesh, clothing_armature, base_avatar_data, clothing_avatar_data, preserve_optional_humanoid_bones=True)
normalize_bone_weights(base_mesh, base_avatar_data)
base_weights_time = time.time()
print(f"ベースアバターウェイト更新: {base_weights_time - setup_time:.2f}秒")
# Templateからの変換の場合股下の頂点グループをここで作成しておく
if clothing_avatar_data.get("name", None) == "Template":
print(f"Templateからの変換 股下の頂点グループを作成")
current_active_object = bpy.context.view_layer.objects.active
template_fbx_path = clothing_avatar_data.get("defaultFBXPath", None)
clothing_avatar_data_path = config_pair['clothing_avatar_data']
# template_fbx_pathを絶対パスに変換
if template_fbx_path and not os.path.isabs(template_fbx_path):
# 相対パスの最上位ディレクトリを取得
relative_parts = template_fbx_path.split(os.sep)
if relative_parts:
top_dir = relative_parts[0]
# clothing_avatar_data_pathの末尾から該当するディレクトリを検索
clothing_path_parts = clothing_avatar_data_path.split(os.sep)
found_index = -1
# 末尾から検索
for i in range(len(clothing_path_parts) - 1, -1, -1):
if clothing_path_parts[i] == top_dir:
found_index = i
break
if found_index != -1:
# 見つかった場合、絶対パスを構築
base_path = os.sep.join(clothing_path_parts[:found_index])
template_fbx_path = os.path.join(base_path, template_fbx_path)
template_fbx_path = os.path.normpath(template_fbx_path)
print(f"template_fbx_path: {template_fbx_path}")
import_base_fbx(template_fbx_path)
template_obj = bpy.data.objects.get(clothing_avatar_data.get("meshName", None))
template_armature = None
for modifier in template_obj.modifiers:
if modifier.type == 'ARMATURE':
template_armature = modifier.object
break
if template_armature is None:
print(f"Warning: Armatureモディファイアが見つかりません")
return
# select_vertices_by_conditions(template_obj, "MF_crotch", clothing_avatar_data, radius=0.075, max_angle_degrees=45.0)
# for obj in clothing_meshes:
# find_vertices_near_faces(template_obj, obj, "MF_crotch", 0.01)
crotch_vertex_group_filepath = os.path.join(os.path.dirname(template_fbx_path), "vertex_group_weights_crotch.json")
crotch_group_name = load_vertex_group(template_obj, crotch_vertex_group_filepath)
if crotch_group_name:
# LeftUpperLegとRightUpperLegボーンにY軸回転を適用
print(" LeftUpperLegとRightUpperLegボーンにY軸回転を適用")
bpy.context.view_layer.objects.active = template_armature
bpy.ops.object.mode_set(mode='POSE')
# humanoidBonesからLeftUpperLegとRightUpperLegのboneNameを取得
left_upper_leg_bone = None
right_upper_leg_bone = None
for bone_map in clothing_avatar_data.get("humanoidBones", []):
if bone_map.get("humanoidBoneName") == "LeftUpperLeg":
left_upper_leg_bone = bone_map.get("boneName")
elif bone_map.get("humanoidBoneName") == "RightUpperLeg":
right_upper_leg_bone = bone_map.get("boneName")
# LeftUpperLegボーンに-45度のY軸回転を適用
if left_upper_leg_bone and left_upper_leg_bone in template_armature.pose.bones:
bone = template_armature.pose.bones[left_upper_leg_bone]
current_world_matrix = template_armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = template_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-40), 4, 'Y')
bone.matrix = template_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperLegボーンに45度のY軸回転を適用
if right_upper_leg_bone and right_upper_leg_bone in template_armature.pose.bones:
bone = template_armature.pose.bones[right_upper_leg_bone]
current_world_matrix = template_armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = template_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(40), 4, 'Y')
bone.matrix = template_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
if left_upper_leg_bone and left_upper_leg_bone in clothing_armature.pose.bones:
bone = clothing_armature.pose.bones[left_upper_leg_bone]
current_world_matrix = clothing_armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = clothing_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-40), 4, 'Y')
bone.matrix = clothing_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
if right_upper_leg_bone and right_upper_leg_bone in clothing_armature.pose.bones:
bone = clothing_armature.pose.bones[right_upper_leg_bone]
current_world_matrix = clothing_armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = clothing_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(40), 4, 'Y')
bone.matrix = clothing_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.ops.object.mode_set(mode='OBJECT')
bpy.context.view_layer.update()
for obj in clothing_meshes:
#transfer_weights_from_nearest_vertex(template_obj, obj, crotch_group_name)
find_vertices_near_faces(template_obj, obj, crotch_group_name, 0.01, use_all_faces=True, smooth_repeat=3)
# LeftUpperLegボーンに-45度のY軸回転を適用
if left_upper_leg_bone and left_upper_leg_bone in template_armature.pose.bones:
bone = template_armature.pose.bones[left_upper_leg_bone]
current_world_matrix = template_armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = template_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(40), 4, 'Y')
bone.matrix = template_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
# RightUpperLegボーンに45度のY軸回転を適用
if right_upper_leg_bone and right_upper_leg_bone in template_armature.pose.bones:
bone = template_armature.pose.bones[right_upper_leg_bone]
current_world_matrix = template_armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = template_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-40), 4, 'Y')
bone.matrix = template_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
if left_upper_leg_bone and left_upper_leg_bone in clothing_armature.pose.bones:
bone = clothing_armature.pose.bones[left_upper_leg_bone]
current_world_matrix = clothing_armature.matrix_world @ bone.matrix
# グローバル座標系での-45度Y軸回転を適用
head_world_transformed = clothing_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(40), 4, 'Y')
bone.matrix = clothing_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
if right_upper_leg_bone and right_upper_leg_bone in clothing_armature.pose.bones:
bone = clothing_armature.pose.bones[right_upper_leg_bone]
current_world_matrix = clothing_armature.matrix_world @ bone.matrix
# グローバル座標系での45度Y軸回転を適用
head_world_transformed = clothing_armature.matrix_world @ bone.head
offset_matrix = mathutils.Matrix.Translation(head_world_transformed * -1.0)
rotation_matrix = mathutils.Matrix.Rotation(math.radians(-40), 4, 'Y')
bone.matrix = clothing_armature.matrix_world.inverted() @ offset_matrix.inverted() @ rotation_matrix @ offset_matrix @ current_world_matrix
bpy.context.view_layer.update()
blur_vertex_group_filepath = os.path.join(os.path.dirname(template_fbx_path), "vertex_group_weights_blur.json")
blur_group_name = load_vertex_group(template_obj, blur_vertex_group_filepath)
if blur_group_name:
for obj in clothing_meshes:
transfer_weights_from_nearest_vertex(template_obj, obj, blur_group_name)
inpaint_vertex_group_filepath = os.path.join(os.path.dirname(template_fbx_path), "vertex_group_weights_inpaint.json")
inpaint_group_name = load_vertex_group(template_obj, inpaint_vertex_group_filepath)
if inpaint_group_name:
for obj in clothing_meshes:
transfer_weights_from_nearest_vertex(template_obj, obj, inpaint_group_name)
bpy.data.objects.remove(bpy.data.objects["Body.Template"], do_unlink=True)
bpy.data.objects.remove(bpy.data.objects["Body.Template.Eyes"], do_unlink=True)
bpy.data.objects.remove(bpy.data.objects["Body.Template.Head"], do_unlink=True)
bpy.data.objects.remove(bpy.data.objects["Armature.Template"], do_unlink=True)
print(f"Templateからの変換 股下の頂点グループ作成完了")
bpy.context.view_layer.objects.active = current_active_object
# Apply BlendShape Deformation Fields before pose application
print(f"Status: BlendShape用 Deformation Field適用中")
print(f"Progress: {(pair_index + 0.33) / total_pairs * 0.9:.3f}")
blend_shape_labels = config_pair['blend_shapes'].split(',') if config_pair['blend_shapes'] else None
if blend_shape_labels:
for obj in clothing_meshes:
reset_shape_keys(obj)
remove_empty_vertex_groups(obj)
normalize_vertex_weights(obj)
apply_blendshape_deformation_fields(obj, config_pair['field_data'], blend_shape_labels, clothing_avatar_data, config_pair['blend_shape_values'])
blendshape_time = time.time()
print(f"BlendShape用 Deformation Field適用: {blendshape_time - base_weights_time:.2f}秒")
# Apply pose from JSON
print(f"Status: ポーズ適用中")
print(f"Progress: {(pair_index + 0.35) / total_pairs * 0.9:.3f}")
add_clothing_pose_from_json(clothing_armature, config_pair['pose_data'], config_pair['init_pose'], config_pair['clothing_avatar_data'], config_pair['base_avatar_data'])
pose_time = time.time()
print(f"ポーズ適用: {pose_time - blendshape_time:.2f}秒")
print(f"Status: 重複頂点属性設定中")
print(f"Progress: {(pair_index + 0.4) / total_pairs * 0.9:.3f}")
create_overlapping_vertices_attributes(clothing_meshes, base_avatar_data)
vertices_attributes_time = time.time()
print(f"重複頂点属性設定: {vertices_attributes_time - pose_time:.2f}秒")
for obj in clothing_meshes:
create_hinge_bone_group(obj, clothing_armature, clothing_avatar_data)
# Process each mesh object with armature modifier
print(f"Status: メッシュ変形処理中")
print(f"Progress: {(pair_index + 0.45) / total_pairs * 0.9:.3f}")
propagated_groups_map = {} # メッシュごとの伝播記録用グループ名を記録
field_distance_groups = {} # 各メッシュのフィールド距離頂点グループを記録
cycle1_start = time.time()
for obj in clothing_meshes:
obj_start = time.time()
print("cycle1 " + obj.name)
reset_shape_keys(obj)
remove_empty_vertex_groups(obj)
normalize_vertex_weights(obj)
merge_auxiliary_to_humanoid_weights(obj, clothing_avatar_data)
# ウェイトの伝播を実行
temp_group_name = propagate_bone_weights(obj)
if temp_group_name: # 伝播が実行された場合のみ記録
propagated_groups_map[obj.name] = temp_group_name
# 微小なウェイトを除外
cleanup_weights_time_start = time.time()
for vert in obj.data.vertices:
groups_to_remove = []
for g in vert.groups:
if g.weight < 0.0005:
groups_to_remove.append(g.group)
# 微小なウェイトを持つグループからその頂点を削除
for group_idx in groups_to_remove:
try:
obj.vertex_groups[group_idx].remove([vert.index])
except RuntimeError:
continue
cleanup_weights_time = time.time() - cleanup_weights_time_start
print(f" 微小ウェイト除外: {cleanup_weights_time:.2f}秒")
create_deformation_mask(obj, clothing_avatar_data)
if pair_index == 0 and use_subdivision and obj.name not in cloth_metadata:
subdivide_long_edges(obj)
subdivide_breast_faces(obj, clothing_avatar_data)
if use_triangulation and not use_subdivision and obj.name not in cloth_metadata and pair_index == total_pairs - 1:
triangulate_mesh(obj)
# 頂点ウェイトを記録し、ボーンウェイトを統合
original_weights = save_vertex_weights(obj)
# ボーンウェイトの統合処理
process_bone_weight_consolidation(obj, clothing_avatar_data)
process_mesh_with_connected_components_inline(
obj,
config_pair['field_data'],
blend_shape_labels,
clothing_avatar_data,
base_avatar_data,
clothing_armature,
cloth_metadata,
subdivision=use_subdivision,
skip_blend_shape_generation=config_pair['skip_blend_shape_generation'],
config_data=config_pair['config_data']
)
# イテレーション終了時: 元のウェイト状態に復元
restore_vertex_weights(obj, original_weights)
if obj.data.shape_keys:
# _generatedサフィックス付きシェイプキーの処理
generated_shape_keys = []
for shape_key in obj.data.shape_keys.key_blocks:
if shape_key.name.endswith("_generated"):
generated_shape_keys.append(shape_key.name)
# _generatedシェイプキーを対応するベースシェイプキーに統合
for generated_name in generated_shape_keys:
base_name = generated_name[:-10] # "_generated"を除去
generated_key = obj.data.shape_keys.key_blocks.get(generated_name)
base_key = obj.data.shape_keys.key_blocks.get(base_name)
if generated_key and base_key:
# generatedシェイプキーの内容でベースシェイプキーを上書き
for i, point in enumerate(generated_key.data):
base_key.data[i].co = point.co
print(f"Merged {generated_name} into {base_name} for {obj.name}")
# generatedシェイプキーを削除
obj.shape_key_remove(generated_key)
print(f"Removed generated shape key: {generated_name} from {obj.name}")
print(f" {obj.name}の処理: {time.time() - obj_start:.2f}秒")
cycle1_end = time.time()
print(f"サイクル1全体: {cycle1_end - cycle1_start:.2f}秒")
for obj in clothing_meshes:
if obj.data.shape_keys:
for key_block in obj.data.shape_keys.key_blocks:
print(f"Shape key: {key_block.name} / {key_block.value} found on {obj.name}")
right_base_mesh, left_base_mesh = duplicate_mesh_with_partial_weights(base_mesh, base_avatar_data)
duplicate_time = time.time()
print(f"ベースメッシュ複製: {duplicate_time - cycle1_end:.2f}秒")
# まず最初に包含関係を調べる
print(f"Status: メッシュの包含関係検出中")
print(f"Progress: {(pair_index + 0.5) / total_pairs * 0.9:.3f}")
containing_objects = find_containing_objects(clothing_meshes, threshold=0.04)
print(f"Found {sum(len(contained) for contained in containing_objects.values())} objects that are contained within others")
containing_time = time.time()
print(f"包含関係検出: {containing_time - duplicate_time:.2f}秒")
# 処理済みオブジェクトを追跡(weight transfer処理のみ)
weight_transfer_processed = set()
armature_settings_dict = {}
# Cycle2の個別処理部分
print(f"Status: サイクル2前処理中")
print(f"Progress: {(pair_index + 0.55) / total_pairs * 0.9:.3f}")
cycle2_pre_start = time.time()
# アバターデータからsubHumanoidBonesとsubAuxiliaryBonesを取得し、現在のデータを上書き
# その前に変更前のデータを保存
original_humanoid_bones = None
original_auxiliary_bones = None
if base_avatar_data.get('subHumanoidBones') or base_avatar_data.get('subAuxiliaryBones'):
print("subHumanoidBonesとsubAuxiliaryBonesを適用中...")
# 元のデータをバックアップ
original_humanoid_bones = base_avatar_data.get('humanoidBones', []).copy() if base_avatar_data.get('humanoidBones') else []
original_auxiliary_bones = base_avatar_data.get('auxiliaryBones', []).copy() if base_avatar_data.get('auxiliaryBones') else []
# subHumanoidBonesで上書き
if base_avatar_data.get('subHumanoidBones'):
# 現在のhumanoidBonesから同じhumanoidBoneNameを持つものを探して上書き
sub_humanoid_bones = base_avatar_data['subHumanoidBones']
humanoid_bones = base_avatar_data.get('humanoidBones', [])
for sub_bone in sub_humanoid_bones:
sub_humanoid_name = sub_bone.get('humanoidBoneName')
if sub_humanoid_name:
# 既存のhumanoidBonesから同じhumanoidBoneNameを持つものを探す
for i, existing_bone in enumerate(humanoid_bones):
if existing_bone.get('humanoidBoneName') == sub_humanoid_name:
humanoid_bones[i] = sub_bone.copy()
break
else:
# 見つからない場合は追加
humanoid_bones.append(sub_bone.copy())
# subAuxiliaryBonesで上書き
if base_avatar_data.get('subAuxiliaryBones'):
# 現在のauxiliaryBonesから同じhumanoidBoneNameを持つものを探して上書き
sub_auxiliary_bones = base_avatar_data['subAuxiliaryBones']
auxiliary_bones = base_avatar_data.get('auxiliaryBones', [])
for sub_aux in sub_auxiliary_bones:
sub_humanoid_name = sub_aux.get('humanoidBoneName')
if sub_humanoid_name:
# 既存のauxiliaryBonesから同じhumanoidBoneNameを持つものを探す
for i, existing_aux in enumerate(auxiliary_bones):
if existing_aux.get('humanoidBoneName') == sub_humanoid_name:
auxiliary_bones[i] = sub_aux.copy()
break
else:
# 見つからない場合は追加
auxiliary_bones.append(sub_aux.copy())
print("subHumanoidBonesとsubAuxiliaryBonesの適用完了")
# if clothing_avatar_data.get("name", None) != "Template":
# select_vertices_by_conditions(base_mesh, "MF_crotch", base_avatar_data, radius=0.075, max_angle_degrees=45.0)
if base_avatar_data.get("name", None) == "Template" and _is_A_pose and base_avatar_data.get('basePoseA', None):
armpit_vertex_group_filepath2 = os.path.join(os.path.dirname(config_pair['base_fbx']), "vertex_group_weights_armpit.json")
armpit_group_name2 = load_vertex_group(base_mesh, armpit_vertex_group_filepath2)
if armpit_group_name2:
for obj in clothing_meshes:
find_vertices_near_faces(base_mesh, obj, armpit_group_name2, 0.1, 45.0)
if base_avatar_data.get("name", None) == "Template":
crotch_vertex_group_filepath2 = os.path.join(os.path.dirname(config_pair['base_fbx']), "vertex_group_weights_crotch2.json")
crotch_group_name2 = load_vertex_group(base_mesh, crotch_vertex_group_filepath2)
if crotch_group_name2:
for obj in clothing_meshes:
# transfer_weights_from_nearest_vertex(base_mesh, obj, crotch_group_name2)
find_vertices_near_faces(base_mesh, obj, crotch_group_name2, 0.01, smooth_repeat=3)
blur_vertex_group_filepath2 = os.path.join(os.path.dirname(config_pair['base_fbx']), "vertex_group_weights_blur.json")
blur_group_name2 = load_vertex_group(base_mesh, blur_vertex_group_filepath2)
if blur_group_name2:
for obj in clothing_meshes:
transfer_weights_from_nearest_vertex(base_mesh, obj, blur_group_name2)
inpaint_vertex_group_filepath2 = os.path.join(os.path.dirname(config_pair['base_fbx']), "vertex_group_weights_inpaint.json")
inpaint_group_name2 = load_vertex_group(base_mesh, inpaint_vertex_group_filepath2)
if inpaint_group_name2 :
for obj in clothing_meshes:
transfer_weights_from_nearest_vertex(base_mesh, obj, inpaint_group_name2)
for obj in clothing_meshes:
obj_start = time.time()
print("cycle2 (pre-weight transfer) " + obj.name)
# Store armature modifier settings
armature_settings = store_armature_modifier_settings(obj)
armature_settings_dict[obj] = armature_settings
# Apply modifiers and process humanoid vertex groups (これらは個別に適用)
#apply_modifiers_keep_shapekeys_with_temp(obj)
generate_temp_shapekeys_for_weight_transfer(obj, clothing_armature, clothing_avatar_data, _is_A_pose)
process_missing_bone_weights(obj, base_armature, clothing_avatar_data, base_avatar_data, preserve_optional_humanoid_bones=False)
process_humanoid_vertex_groups(obj, clothing_armature, base_avatar_data, clothing_avatar_data)
# if clothing_avatar_data.get("name", None) != "Template":
# find_vertices_near_faces(base_mesh, obj, "MF_crotch", 0.01)
restore_armature_modifier(obj, armature_settings_dict[obj])
set_armature_modifier_visibility(obj, False, False)
set_armature_modifier_target_armature(obj, base_armature)
print(f" {obj.name}の前処理: {time.time() - obj_start:.2f}秒")
cycle2_pre_end = time.time()
print(f"サイクル2前処理全体: {cycle2_pre_end - cycle2_pre_start:.2f}秒")
# Weight transfer処理(包含関係を考慮)
print(f"Status: サイクル2ウェイト転送中")
print(f"Progress: {(pair_index + 0.6) / total_pairs * 0.9:.3f}")
weight_transfer_start = time.time()
for obj in clothing_meshes:
if obj in weight_transfer_processed:
continue
obj_start = time.time()
# このオブジェクトが他のオブジェクトを包含している場合
if obj in containing_objects and containing_objects[obj]:
contained_objects = containing_objects[obj]
print(f"{obj.name} contains {contained_objects} other objects within distance 0.02 - applying joint weight transfer")
# 一時的に結合してweight transfer処理のみを適用
temporarily_merge_for_weight_transfer(
obj,
contained_objects,
base_armature,
base_avatar_data,
clothing_avatar_data,
config_pair['field_data'],
clothing_armature,
config_pair.get('next_blendshape_settings', []),
cloth_metadata
)
# 処理済みとしてマーク
weight_transfer_processed.add(obj)
weight_transfer_processed.update(contained_objects)
print(f" {obj.name}の包含ウェイト転送: {time.time() - obj_start:.2f}秒")
for obj in clothing_meshes:
if obj in weight_transfer_processed:
continue
# 包含関係がない場合は通常のweight transfer処理
obj_start = time.time()
print(f"Applying individual weight transfer to {obj.name}")
# Weight transfer
# process_weight_transfer(obj, base_armature, base_avatar_data, config_pair['field_data'], clothing_armature, cloth_metadata)
process_weight_transfer_with_component_normalization(obj, base_armature, base_avatar_data, clothing_avatar_data, config_pair['field_data'], clothing_armature, config_pair.get('next_blendshape_settings', []), cloth_metadata)
# 処理済みとしてマーク
weight_transfer_processed.add(obj)
print(f" {obj.name}の個別ウェイト転送: {time.time() - obj_start:.2f}秒")
# 重なっている頂点のウェイトを揃える処理
normalize_overlapping_vertices_weights(clothing_meshes, base_avatar_data)
weight_transfer_end = time.time()
print(f"ウェイト転送処理全体: {weight_transfer_end - weight_transfer_start:.2f}秒")
print(f"Status: サイクル2後処理中")
print(f"Progress: {(pair_index + 0.65) / total_pairs * 0.9:.3f}")
cycle2_post_start = time.time()
for obj in clothing_meshes:
obj_start = time.time()
print("cycle2 (post-weight transfer) " + obj.name)
set_armature_modifier_visibility(obj, True, True)
set_armature_modifier_target_armature(obj, clothing_armature)
print(f" {obj.name}の後処理: {time.time() - obj_start:.2f}秒")
cycle2_post_end = time.time()
print(f"サイクル2後処理全体: {cycle2_post_end - cycle2_post_start:.2f}秒")
print(f"Status: ポーズ適用中")
print(f"Progress: {(pair_index + 0.7) / total_pairs * 0.9:.3f}")
apply_pose_as_rest(clothing_armature)
pose_rest_time = time.time()
print(f"ポーズをレストポーズとして適用: {pose_rest_time - cycle2_post_end:.2f}秒")
print(f"Status: ボーンフィールドデルタ適用中")
print(f"Progress: {(pair_index + 0.75) / total_pairs * 0.9:.3f}")
apply_bone_field_delta(clothing_armature, config_pair['field_data'], clothing_avatar_data)
bone_delta_time = time.time()
print(f"ボーンフィールドデルタ適用: {bone_delta_time - pose_rest_time:.2f}秒")
print(f"Status: ポーズ適用中")
print(f"Progress: {(pair_index + 0.85) / total_pairs * 0.9:.3f}")
apply_pose_as_rest(clothing_armature)
second_pose_rest_time = time.time()
print(f"2回目のポーズをレストポーズとして適用: {second_pose_rest_time - bone_delta_time:.2f}秒")
print(f"Status: すべての変換を適用中")
print(f"Progress: {(pair_index + 0.9) / total_pairs * 0.9:.3f}")
apply_all_transforms()
transforms_time = time.time()
print(f"すべての変換を適用: {transforms_time - second_pose_rest_time:.2f}秒")
# 伝播させたウェイトを削除
print(f"Status: 伝播ウェイト削除中")
print(f"Progress: {(pair_index + 0.95) / total_pairs * 0.9:.3f}")
propagated_start = time.time()
for obj in clothing_meshes:
if obj.name in propagated_groups_map:
remove_propagated_weights(obj, propagated_groups_map[obj.name])
propagated_end = time.time()
print(f"伝播ウェイト削除: {propagated_end - propagated_start:.2f}秒")
# subHumanoidBonesとsubAuxiliaryBonesを適用していた場合、元のデータを復元
if original_humanoid_bones is not None or original_auxiliary_bones is not None:
print("元のhumanoidBonesとauxiliaryBonesを復元中...")
if original_humanoid_bones is not None:
base_avatar_data['humanoidBones'] = original_humanoid_bones
if original_auxiliary_bones is not None:
base_avatar_data['auxiliaryBones'] = original_auxiliary_bones
print("元のボーンデータの復元完了")
print(f"Status: ヒューマノイドボーン置換中")
print(f"Progress: {(pair_index + 0.95) / total_pairs * 0.9:.3f}")
base_pose_filepath = None
if config_pair.get('do_not_use_base_pose', 0) == 0:
base_pose_filepath = base_avatar_data.get('basePose', None)
if base_pose_filepath:
pose_dir = os.path.dirname(os.path.abspath(config_pair['base_avatar_data']))
base_pose_filepath = os.path.join(pose_dir, base_pose_filepath)
if pair_index == 0:
replace_humanoid_bones(base_armature, clothing_armature, base_avatar_data, clothing_avatar_data, True, base_pose_filepath, clothing_meshes, False)
else:
replace_humanoid_bones(base_armature, clothing_armature, base_avatar_data, clothing_avatar_data, False, base_pose_filepath, clothing_meshes, True)
bones_replace_time = time.time()
print(f"ヒューマノイドボーン置換: {bones_replace_time - propagated_end:.2f}秒")
# ConfigファイルのclothingBlendShapeSettingsに基づくblendshape設定
print(f"Status: ブレンドシェイプ設定中")
print(f"Progress: {(pair_index + 0.96) / total_pairs * 0.9:.3f}")
blendshape_start = time.time()
if "clothingBlendShapeSettings" in config_pair['config_data']:
blend_shape_settings = config_pair['config_data']["clothingBlendShapeSettings"]
for setting in blend_shape_settings:
label = setting.get("label")
if label in blend_shape_labels:
blendshapes = setting.get("blendshapes", [])
for bs in blendshapes:
shape_key_name = bs.get("name")
value = bs.get("value", 0)
for obj in clothing_meshes:
if obj.data.shape_keys and shape_key_name in obj.data.shape_keys.key_blocks:
obj.data.shape_keys.key_blocks[shape_key_name].value = value / 100.0
print(f"Set blendshape '{shape_key_name}' on {obj.name} to {value/100.0}")
blendshape_end = time.time()
print(f"ブレンドシェイプ設定: {blendshape_end - blendshape_start:.2f}秒")
print(f"Status: クロスメタデータ更新中")
print(f"Progress: {(pair_index + 0.97) / total_pairs * 0.9:.3f}")
metadata_update_start = time.time()
if args.cloth_metadata and os.path.exists(args.cloth_metadata):
try:
# ClothMetadataを読み込む
with open(args.cloth_metadata, 'r', encoding='utf-8') as f:
metadata_dict = json.load(f)
# ClothMetadataを更新して保存
update_cloth_metadata(metadata_dict, args.cloth_metadata, vertex_index_mapping)
except Exception as e:
print(f"Error processing cloth metadata: {e}")
import traceback
traceback.print_exc()
metadata_update_end = time.time()
print(f"クロスメタデータ更新: {metadata_update_end - metadata_update_start:.2f}秒")
# FBXエクスポート前処理
print(f"Status: FBXエクスポート前処理中")
print(f"Progress: {(pair_index + 0.975) / total_pairs * 0.9:.3f}")
preprocess_start = time.time()
# blend_shape_labelsの取得
blend_shape_labels = []
if args.blend_shapes:
blend_shape_labels = [label for label in args.blend_shapes.split(',')]
for obj in clothing_meshes:
if obj.data.shape_keys:
for key_block in obj.data.shape_keys.key_blocks:
print(f"Shape key: {key_block.name} / {key_block.value} found on {obj.name}")
# apply_blendshape_deformation_fieldsで作成されたシェイプキーを削除
merge_and_clean_generated_shapekeys(clothing_meshes, blend_shape_labels)
if clothing_avatar_data.get("name", None) == "Template":
import re
pattern = re.compile(r'___\d+$')
for obj in clothing_meshes:
if obj.data.shape_keys:
keys_to_remove = []
for key_block in obj.data.shape_keys.key_blocks:
if pattern.search(key_block.name):
keys_to_remove.append(key_block.name)
for key_name in keys_to_remove:
key_block = obj.data.shape_keys.key_blocks.get(key_name)
if key_block:
obj.shape_key_remove(key_block)
print(f"Removed shape key: {key_name} from {obj.name}")
if pair_index > 0:
bpy.ops.object.mode_set(mode='OBJECT')
clothing_blend_shape_labels = []
for blend_shape_field in clothing_avatar_data['blendShapeFields']:
clothing_blend_shape_labels.append(blend_shape_field['label'])
base_blend_shape_labels = []
for blend_shape_field in base_avatar_data['blendShapeFields']:
base_blend_shape_labels.append(blend_shape_field['label'])
for obj in clothing_meshes:
if obj.data.shape_keys:
for key_block in obj.data.shape_keys.key_blocks:
if key_block.name in clothing_blend_shape_labels and key_block.name not in base_blend_shape_labels:
prev_shape_key = obj.data.shape_keys.key_blocks.get(key_block.name)
obj.shape_key_remove(prev_shape_key)
print(f"Removed shape key: {key_block.name} from {obj.name}")
# Highheelシェイプキーの値を1に設定
set_highheel_shapekey_values(clothing_meshes, blend_shape_labels, base_avatar_data)
# Export armature bone data to JSON before FBX export
# print(f"Status: ボーン情報JSON出力中")
# json_output_path = args.output.rsplit('.', 1)[0] + '_bone_data.json'
# bpy.context.view_layer.update()
# export_armature_bone_data_to_json(clothing_armature, json_output_path)
preprocess_end = time.time()
print(f"FBXエクスポート前処理: {preprocess_end - preprocess_start:.2f}秒")
# Select only imported objects for export
bpy.ops.object.select_all(action='DESELECT')
for obj in bpy.data.objects:
if obj.name not in ["Body.BaseAvatar", "Armature.BaseAvatar", "Body.BaseAvatar.RightOnly", "Body.BaseAvatar.LeftOnly"]:
obj.select_set(True)
round_bone_coordinates(clothing_armature, decimal_places=6)
# Export as FBX
print(f"Status: FBXエクスポート中")
print(f"Progress: {(pair_index + 0.98) / total_pairs * 0.9:.3f}")
export_start = time.time()
export_fbx(args.output)
export_end = time.time()
print(f"FBXエクスポート: {export_end - export_start:.2f}秒")
# Save the current scene
# if pair_index == 0:
# save_start = time.time()
# output_blend = args.output.rsplit('.', 1)[0] + '.blend'
# bpy.ops.wm.save_as_mainfile(filepath=output_blend)
# save_end = time.time()
# print(f"Blendファイル保存: {save_end - save_start:.2f}秒")
total_time = time.time() - start_time
print(f"Progress: {(pair_index + 1.0) / total_pairs * 0.9:.3f}")
print(f"処理完了: 合計 {total_time:.2f}秒")
return True
except Exception as e:
import traceback
print("============= Error Details =============")
print(f"Error message: {str(e)}")
print("\n============= Full Stack Trace =============")
print(traceback.format_exc())
print("==========================================")
output_blend = args.output.rsplit('.', 1)[0] + '.blend'
bpy.ops.wm.save_as_mainfile(filepath=output_blend)
return False
def main():
try:
import time
start_time = time.time()
sys.stdout.reconfigure(line_buffering=True)
print(f"Status: アドオン有効化中")
print(f"Progress: 0.01")
bpy.ops.preferences.addon_enable(module='robust-weight-transfer')
print(f"Addon enabled: {time.time() - start_time:.2f}秒")
# Parse command line arguments
print(f"Status: 引数解析中")
print(f"Progress: 0.02")
args = parse_args()
parse_time = time.time()
print(f"引数解析: {parse_time - start_time:.2f}秒")
# Process each config pair
total_pairs = len(args.config_pairs)
successful_pairs = 0
for pair_index, config_pair in enumerate(args.config_pairs):
try:
print(f"\n{'='*60}")
print(f"処理開始: ペア {pair_index + 1}/{total_pairs}")
print(f"Base FBX: {config_pair['base_fbx']}")
print(f"Config: {config_pair['config_path']}")
print(f"{'='*60}")
# Create output filename with index for multiple pairs
# if total_pairs > 1:
# base_output = args.output.rsplit('.', 1)[0]
# extension = args.output.rsplit('.', 1)[1] if '.' in args.output else 'fbx'
# output_file = f"{base_output}_{pair_index + 1:03d}.{extension}"
# else:
# output_file = args.output
output_file = args.output
# Create a copy of args with updated output path
pair_args = argparse.Namespace(**vars(args))
pair_args.output = output_file
success = process_single_config(pair_args, config_pair, pair_index, total_pairs, start_time)
if success:
successful_pairs += 1
print(f"✓ ペア {pair_index + 1} 正常完了: {output_file}")
else:
print(f"✗ ペア {pair_index + 1} 処理失敗")
break
except Exception as e:
import traceback
print(f"✗ ペア {pair_index + 1} でエラーが発生しました:")
print("============= Error Details =============")
print(f"Error message: {str(e)}")
print("\n============= Full Stack Trace =============")
print(traceback.format_exc())
print("==========================================")
# Save error scene
try:
error_output = args.output.rsplit('.', 1)[0] + f'_error_{pair_index + 1:03d}.blend'
bpy.ops.wm.save_as_mainfile(filepath=error_output)
print(f"エラー時のシーンを保存: {error_output}")
except:
pass
total_time = time.time() - start_time
print(f"Progress: 1.00")
print(f"\n{'='*60}")
print(f"全体処理完了")
print(f"成功: {successful_pairs}/{total_pairs} ペア")
print(f"合計時間: {total_time:.2f}秒")
print(f"{'='*60}")
return successful_pairs == total_pairs
except Exception as e:
import traceback
print("============= Fatal Error =============")
print(f"Error message: {str(e)}")
print("\n============= Full Stack Trace =============")
print(traceback.format_exc())
print("=====================================")
return False
if __name__ == "__main__":
main()
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment