Purpose
I want to manipulate a DataFrame with the five constructs AND, OR, NOT, "Key Phrase", and ( )!
The problem
With a straightforward approach, it is hard to manipulate a DataFrame while respecting the evaluation order imposed by parentheses ( ).
For example, to compute (a AND b) OR NOT (c AND d), the evaluation has to proceed as (a AND b) -> (c AND d) -> NOT (c AND d) -> (a AND b) OR NOT (c AND d), and even ChatGPT struggles to carry out this kind of computation correctly every time.
Method (Reverse Polish Notation)
What is Reverse Polish Notation?
Reverse Polish Notation (RPN) is a way of writing mathematical expressions and programs. Because the operator is placed after its operands, it is also called postfix notation.
Other notations include infix notation, where the operator is written between its operands, and prefix notation (Polish notation), where it is written before them. The name comes from the fact that the operator/operand order is the reverse of Polish notation.
— Wikipedia
It is the kind of problem that often shows up on AtCoder, Paiza, and similar sites.
How RPN relates to these operators
In fact, RPN lets us eliminate the parentheses exactly.
Once the parentheses are gone, we can evaluate AND, OR, and NOT by extracting the rows of the DataFrame that contain each term and pushing the results onto a stack, as sketched below.
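To make the stack mechanics concrete, here is a minimal sketch that evaluates the earlier example (a AND b) OR NOT (c AND d) in its RPN form. Plain Python sets stand in for the row sets a DataFrame filter would return; the names a, b, c, d and universe are made up for illustration, and the token order matches what the infix_to_postfix function defined later produces.

# Minimal stack-based RPN evaluation, with sets in place of DataFrames.
# `universe` plays the role of the full DataFrame for the NOT operation.
universe = {1, 2, 3, 4, 5, 6}
operands = {
    'a': {1, 2, 3},
    'b': {2, 3, 4},
    'c': {3, 4},
    'd': {4, 5},
}

# (a AND b) OR NOT (c AND d)  ->  a b AND  c d AND NOT  OR
rpn = ['a', 'b', 'AND', 'c', 'd', 'AND', 'NOT', 'OR']

stack = []
for token in rpn:
    if token == 'AND':
        right, left = stack.pop(), stack.pop()
        stack.append(left & right)            # intersection
    elif token == 'OR':
        right, left = stack.pop(), stack.pop()
        stack.append(left | right)            # union
    elif token == 'NOT':
        stack.append(universe - stack.pop())  # complement
    else:
        stack.append(operands[token])

print(stack.pop())  # -> {1, 2, 3, 5, 6}: (a AND b) merged with the complement of (c AND d)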
For the details of the algorithm itself, please refer to other articles on the topic.
The main point of this article is the actual code, so that is what I focus on here.
The actual code
Function that evaluates the RPN tokens
def evaluate_postfix(postfix_tokens, df, key):
    # Evaluate an RPN token list by pushing intermediate DataFrames onto a stack.
    stack = []
    original_df = df.copy()
    for token in postfix_tokens:
        if token == 'AND':
            operand2 = stack.pop()
            operand1 = stack.pop()
            stack.append(and_operation(original_df, operand1, operand2))
        elif token == 'OR':
            operand2 = stack.pop()
            operand1 = stack.pop()
            stack.append(or_operation(original_df, operand1, operand2))
        elif token == 'NOT':
            operand = stack.pop()
            stack.append(not_operation(original_df, operand))
        else:
            # Non-operator tokens are search terms or "key phrases"
            stack.append(token2df(df, token, key))
    return stack.pop()
import re


def infix_to_postfix(expression):
    # Operator precedence (NOT binds tightest, then AND, then OR)
    precedence = {'NOT': 3, 'AND': 2, 'OR': 1, '(': 0}
    operators = set(['AND', 'OR', 'NOT'])
    postfix = []
    stack = []
    # Split into tokens (key phrases wrapped in "" are kept as single tokens)
    tokens = re.findall(r'NOT|AND|OR|\(|\)|"[^"]+"|[^\s()]+', expression)
    # Insert an implicit AND between two adjacent non-operator tokens
    new_tokens = []
    for i in range(len(tokens)):
        new_tokens.append(tokens[i])
        if i < len(tokens) - 1:
            if (tokens[i] not in operators and tokens[i] != '(' and tokens[i] != ')'
                    and tokens[i + 1] not in operators and tokens[i + 1] != '(' and tokens[i + 1] != ')'):
                new_tokens.append('AND')
    tokens = new_tokens
    # Shunting-yard: convert the infix token list to postfix (RPN)
    for token in tokens:
        if token == '(':
            stack.append(token)
        elif token == ')':
            top_token = stack.pop()
            while top_token != '(':
                postfix.append(top_token)
                top_token = stack.pop()
        elif token in operators:
            while stack and precedence[stack[-1]] >= precedence[token]:
                postfix.append(stack.pop())
            stack.append(token)
        else:
            postfix.append(token)
    while stack:
        postfix.append(stack.pop())
    return postfix
def token2df(df, token, key):
    # Extract the rows whose `key` column contains the term (case-insensitive substring match)
    token = token.strip().replace("\"", "")
    lower_token = token.lower()
    return df[df[key].apply(lambda x: lower_token in x.lower())]


def and_operation(df, left_df, right_df):
    # AND = rows present in both operands
    intersection_index = left_df.index.intersection(right_df.index).unique()
    return df.loc[intersection_index]


def or_operation(df, left_df, right_df):
    # OR = rows present in either operand
    union_index = left_df.index.union(right_df.index).unique()
    return df.loc[union_index]


def not_operation(df, operand_df):
    # NOT = rows of the original DataFrame not present in the operand
    difference_index = df.index.difference(operand_df.index).unique()
    return df.loc[difference_index]
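Before the walkthrough, here is a minimal end-to-end usage sketch; the three-row DataFrame, its Title column, and the query string are made up purely for illustration:

import pandas as pd

toy_df = pd.DataFrame({
    "Title": [
        "Text-Driven Human Motion Generation",
        "Robot Motion Planning Survey",
        "Video Generation with Diffusion Models",
    ]
})

query = '(motion AND generation) OR NOT "diffusion models"'
postfix = infix_to_postfix(query)
# ['motion', 'generation', 'AND', '"diffusion models"', 'NOT', 'OR']
result = evaluate_postfix(postfix, toy_df, key="Title")
print(result["Title"])  # rows 0 and 1 are returned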
Code walkthrough
- infix_to_postfix converts the input query into Reverse Polish Notation.
- evaluate_postfix evaluates the RPN token list and performs the corresponding DataFrame operations.
- token2df extracts the rows of the DataFrame that contain a given word or key phrase (a sequence of words wrapped in "").
- and_operation, or_operation, and not_operation manipulate the DataFrame according to AND, OR, and NOT, respectively.
As a supplementary note, key phrases are supported by using a regular expression to pull out the parts enclosed in ""; an example of the resulting tokens is shown below.
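For instance, quoted phrases survive tokenization as single tokens, and an implicit AND is inserted between adjacent bare terms; the query strings below are made up for illustration:

print(infix_to_postfix('"motion generation" diffusion'))
# ['"motion generation"', 'diffusion', 'AND']

print(infix_to_postfix('(a AND b) OR NOT (c AND d)'))
# ['a', 'b', 'AND', 'c', 'd', 'AND', 'NOT', 'OR']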
Test code
import pandas as pd

pd.set_option("display.max_colwidth", 1000)


def test():
    df = pd.read_csv("motion_generation.csv", encoding="utf-8")
    key = "Title"
    queries = [
        "generation",
        "NOT generation",
        "generation AND motion",
        "generation OR motion",
        "\"motion generation\"",
        "NOT ( - generation)",
        "NOT (generation AND motion)",
        "NOT (generation OR motion)",
        "(generation OR motion)",
        "(text OR diffuse)",
        "(text OR diffuse) AND (motion OR generation)"
    ]
    results = {}
    for query in queries:
        postfix = infix_to_postfix(query)
        result_df = evaluate_postfix(postfix, df, key)
        results[query] = (postfix, result_df)
    # Display the result for each query
    for query, (postfix, result_df) in results.items():
        print(f"Query: {query}")
        print(f"Postfix: {postfix}")
        print(result_df['Title'])
        print("\n---\n")
Test contents
As the simplest cases, the tests cover the plain AND, OR, and NOT operations.
As more complex cases, they cover queries containing a "key phrase" and queries containing parentheses ( ).
The evaluation is only qualitative, but the results look correct.
One limitation: when extracting rows from the DataFrame, matching is not done on whole words but as a lowercase substring check. As a result, searching for "AI", for example, also returns every title in which "ai" merely appears inside a word, such as Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model (because of "Grained").
Please feel free to improve this part yourselves; one possible direction is sketched below.
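For whole-word (or whole-phrase) matching, one possible tweak is to replace the substring check in token2df with a word-boundary regular expression. This is an untested sketch, not part of the original code:

import re

def token2df_wordwise(df, token, key):
    # Variant of token2df that matches whole words/phrases only,
    # so "AI" no longer matches the "ai" inside "Grained".
    phrase = token.strip().replace("\"", "")
    pattern = re.compile(r'\b' + re.escape(phrase) + r'\b', flags=re.IGNORECASE)
    return df[df[key].apply(lambda x: bool(pattern.search(x)))]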
Test results
Query: generation
Postfix: ['generation']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
1 OmniControl: Control Any Joint at Any Time for Human Motion Generation
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
4 Priority-Centric Human Motion Generation in Discrete Latent Space
...
94 Hierarchical quadratic programming: Fast online humanoid-robot motion generation
95 Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
96 Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
97 Head Motion Generation with Synthetic Speech: A Data Driven Approach
98 Motion generation in android robots during laughing speech
Name: Title, Length: 98, dtype: object
---
Query: NOT generation
Postfix: ['generation', 'NOT']
87 Generating Human Motion from Textual Descriptions with Discrete Representations
Name: Title, dtype: object
---
Query: generation AND motion
Postfix: ['generation', 'motion', 'AND']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
1 OmniControl: Control Any Joint at Any Time for Human Motion Generation
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
4 Priority-Centric Human Motion Generation in Discrete Latent Space
...
93 Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion
94 Hierarchical quadratic programming: Fast online humanoid-robot motion generation
96 Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
97 Head Motion Generation with Synthetic Speech: A Data Driven Approach
98 Motion generation in android robots during laughing speech
Name: Title, Length: 97, dtype: object
---
Query: generation OR motion
Postfix: ['generation', 'motion', 'OR']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
1 OmniControl: Control Any Joint at Any Time for Human Motion Generation
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
4 Priority-Centric Human Motion Generation in Discrete Latent Space
...
94 Hierarchical quadratic programming: Fast online humanoid-robot motion generation
95 Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
96 Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
97 Head Motion Generation with Synthetic Speech: A Data Driven Approach
98 Motion generation in android robots during laughing speech
Name: Title, Length: 99, dtype: object
---
Query: "motion generation"
Postfix: ['"motion generation"']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
1 OmniControl: Control Any Joint at Any Time for Human Motion Generation
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
4 Priority-Centric Human Motion Generation in Discrete Latent Space
...
92 Multiple task optimization with a mixture of controllers for motion generation
94 Hierarchical quadratic programming: Fast online humanoid-robot motion generation
96 Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
97 Head Motion Generation with Synthetic Speech: A Data Driven Approach
98 Motion generation in android robots during laughing speech
Name: Title, Length: 90, dtype: object
---
Query: NOT ( - generation)
Postfix: ['generation', '-', 'NOT']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
1 OmniControl: Control Any Joint at Any Time for Human Motion Generation
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
4 Priority-Centric Human Motion Generation in Discrete Latent Space
...
94 Hierarchical quadratic programming: Fast online humanoid-robot motion generation
95 Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
96 Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
97 Head Motion Generation with Synthetic Speech: A Data Driven Approach
98 Motion generation in android robots during laughing speech
Name: Title, Length: 98, dtype: object
---
Query: NOT (generation AND motion)
Postfix: ['generation', 'motion', 'AND', 'NOT']
87 Generating Human Motion from Textual Descriptions with Discrete Representations
95 Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
Name: Title, dtype: object
---
Query: NOT (generation OR motion)
Postfix: ['generation', 'motion', 'OR', 'NOT']
Series([], Name: Title, dtype: object)
---
Query: (generation OR motion)
Postfix: ['generation', 'motion', 'OR']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
1 OmniControl: Control Any Joint at Any Time for Human Motion Generation
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
4 Priority-Centric Human Motion Generation in Discrete Latent Space
...
94 Hierarchical quadratic programming: Fast online humanoid-robot motion generation
95 Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
96 Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
97 Head Motion Generation with Synthetic Speech: A Data Driven Approach
98 Motion generation in android robots during laughing speech
Name: Title, Length: 99, dtype: object
---
Query: (text OR diffuse)
Postfix: ['text', 'diffuse', 'OR']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
5 HumanTOMATO: Text-aligned Whole-body Motion Generation
7 Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation
19 Plan, Posture and Go: Towards Open-World Text-to-Motion Generation
24 Being Comes from Not-Being: Open-Vocabulary Text-to-Motion Generation with Wordless Training
87 Generating Human Motion from Textual Descriptions with Discrete Representations
Name: Title, dtype: object
---
Query: (text OR diffuse) AND (motion OR generation)
Postfix: ['text', 'diffuse', 'OR', 'motion', 'generation', 'OR', 'AND']
0 MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
2 AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
3 Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
5 HumanTOMATO: Text-aligned Whole-body Motion Generation
7 Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation
19 Plan, Posture and Go: Towards Open-World Text-to-Motion Generation
24 Being Comes from Not-Being: Open-Vocabulary Text-to-Motion Generation with Wordless Training
87 Generating Human Motion from Textual Descriptions with Discrete Representations
Name: Title, dtype: object
Data used
These are the titles of recent motion-generation research papers (as of 2024/06/23).
Title
MotionDiffuse: Text-Driven Human Motion Generation With Diffusion Model
OmniControl: Control Any Joint at Any Time for Human Motion Generation
AttT2M: Text-Driven Human Motion Generation with Multi-Perspective Attention Mechanism
Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model
Priority-Centric Human Motion Generation in Discrete Latent Space
HumanTOMATO: Text-aligned Whole-body Motion Generation
InterGen: Diffusion-based Multi-human Motion Generation under Complex Interactions
Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation
Human Motion Generation: A Survey
CuRobo: Parallelized Collision-Free Robot Motion Generation
Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion Probabilistic Models
FineMoGen: Fine-Grained Spatio-Temporal Motion Generation and Editing
MotionCLIP: Exposing Human Motion Generation to CLIP Space
"EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Motion Generation"
HUMANISE: Language-conditioned Human Motion Generation in 3D Scenes
ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors
Implicit Neural Representations for Variable Length Human Motion Generation
Taming Diffusion Models for Music-driven Conducting Motion Generation
Preview Control applied for humanoid robot motion generation
"Plan, Posture and Go: Towards Open-World Text-to-Motion Generation"
OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers
UDE: A Unified Driving Engine for Human Motion Generation
ActFormer: A GAN-based Transformer towards General Action-Conditioned 3D Human Motion Generation
PoseGPT: Quantization-based 3D Human Motion Generation and Forecasting
Being Comes from Not-Being: Open-Vocabulary Text-to-Motion Generation with Wordless Training
MultiAct: Long-Term 3D Human Motion Generation from Multiple Action Labels
ActFormer: A GAN Transformer Framework towards General Action-Conditioned 3D Human Motion Generation
Freeform Body Motion Generation from Speech
Versatile Motion Generation of Magnetic Origami Spring Robots in the Uniform Magnetic Field
Action-conditioned On-demand Motion Generation
HiT-DVAE: Human Motion Generation via Hierarchical Transformer Dynamical VAE
Design Framework for Motion Generation of Planar Four-Bar Linkage Considering Clearance Joints and Dynamics Performance
Dynamic Optimization Fabrics for Motion Generation
A Data-Driven Cyclic-Motion Generation Scheme for Kinematic Control of Redundant Manipulators
New Joint-Drift-Free Scheme Aided with Projected ZNN for Motion Generation of Redundant Robot Manipulators Perturbed by Disturbances
Composable energy policies for reactive motion generation and reinforcement learning
RNN for Repetitive Motion Generation of Redundant Robot Manipulators: An Orthogonal Projection-Based Scheme
ReLMoGen: Integrating Motion Generation in Reinforcement Learning for Mobile Manipulation
Spatial Attention Point Network for Deep-learning-based Robust Autonomous Robot Motion Generation
A deep learning framework for realistic robot motion generation
Graph-based Normalizing Flow for Human Motion Generation and Reconstruction
Human-Like Arm Motion Generation: A Review
ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation
MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
Exploiting upper-limb functional principal components for human-like motion generation of anthropomorphic robots
DMP-Based Motion Generation for a Walking Exoskeleton Robot Using Reinforcement Learning
Diversified and Untethered Motion Generation Via Crease Patterning from Magnetically Actuated Caterpillar-Inspired Origami Robot
Motion Mamba: Efficient and Long Sequence Motion Generation with Hierarchical and Bidirectional Selective SSM
LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation
Cooperative Motion Generation in a Distributed Network of Redundant Robot Manipulators With Noises
MoStGAN-V: Video Generation with Temporal Motion Styles
Infinite Torsional Motion Generation of a Spherical Parallel Manipulator with Coaxial Input Axes
Dynamic Future Net: Diversified Human Motion Generation
Motion Generation Using Bilateral Control-Based Imitation Learning With Autoregressive Learning
Regularized Hierarchical Quadratic Program for Real-Time Whole-Body Motion Generation
Modelling of strong motion generation areas for a great earthquake in central seismic gap region of Himalayas using the modified semi-empirical approach
Adaptive motion generation using imitation learning and highly compliant end effector for autonomous cleaning
Human motion generation with StyleGAN
Simplification of Motion Generation in the Singular Configuration of a Wheel-Legged Mobile Robot
A novel planar motion generation method based on the synthesis of planetary gear train with noncircular gears
Motion Generation of a Wearable Hip Exoskeleton Robot Using Machine Learning-Based Estimation of Ground Reaction Forces and Moments
Extended Three-Dimensional Walking and Skating Motion Generation for Multiple Noncoplanar Contacts With Anisotropic Friction: Application to Walk and Skateboard and Roller Skate
Synthesis of four-bar linkage motion generation using optimization algorithms
Motion Generation Interface of ROS to PODO Software Framework for Wheeled Humanoid Robot
Time Series Motion Generation Considering Long Short-Term Motion
LaMD: Latent Motion Diffusion for Video Generation
Real-Time Perception Meets Reactive Motion Generation
Bimanual Assembly of Two Parts with Relative Motion Generation and Task Related Optimization
Skeleton-Aided Articulated Motion Generation
Dynamical System Based Robotic Motion Generation With Obstacle Avoidance
Developments in quantitative dimensional synthesis (1970-present): four-bar motion generation
Parallel autonomy in automated vehicles: Safe motion generation with minimal intervention
An Efficient Motion Generation Method for Redundant Humanoid Robot Arm Based on the Intrinsic Principles of Human Arm Motion
Motion Analysis in Vocalized Surprise Expressions and Motion Generation in Android Robots
Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning
A convex model of humanoid momentum dynamics for multi-contact motion generation
Structured contact force optimization for kino-dynamic motion generation
Collaborative Human-Robot Motion Generation Using LSTM-RNN
Motion generation of multi-legged robot in complex terrains by using estimation of distribution algorithm
Transformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation
Deterministic Generation and Guided Motion of Magnetic Skyrmions by Focused He+-Ion Irradiation
Safe and feasible motion generation for autonomous driving via constrained policy net
Actuator synchronization for adaptive motion generation without any sensor or microprocessor
Human Motion Generation via Cross-Space Constrained Sampling
Human character balancing motion generation based on a double inverted pendulum model
On the application of RRSS motion generation and RRSS Axode generation for the design of a concept prosthetic knee
An automation system for data processing and motion generation
Generating Human Motion from Textual Descriptions with Discrete Representations
Discrete-Time Zhang Neural Network for Online Time-Varying Nonlinear Optimization With Application to Manipulator Motion Generation
Open-source benchmarking for learned reaching motion generation in robotics
Optimal control for whole-body motion generation using center-of-mass dynamics for predefined multi-contact configurations
Comparison of remote center-of-motion generation algorithms
Multiple task optimization with a mixture of controllers for motion generation
Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion
Hierarchical quadratic programming: Fast online humanoid-robot motion generation
Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting
Human-Like Motion Generation and Control for Humanoid's Dual Arm Object Manipulation
Head Motion Generation with Synthetic Speech: A Data Driven Approach
Motion generation in android robots during laughing speech