Natural Language Processing
[tf 2.x] Customizing predict results with tf.keras --> GCP ai-platform (keyed model, serving_signature)
Dan-k 2020. 7. 8. 17:15
In [29]:
import os
import urllib
import pandas as pd
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras import optimizers
from tensorflow.keras.layers import (
    Dense,
    Embedding,
    GRU
)
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
from sklearn.model_selection import train_test_split
In [2]:
## Load the Naver movie review data
urllib.request.urlretrieve("https://raw.githubusercontent.com/e9t/nsmc/master/ratings_train.txt", filename="ratings_train.txt")
urllib.request.urlretrieve("https://raw.githubusercontent.com/e9t/nsmc/master/ratings_test.txt", filename="ratings_test.txt")
train_data = pd.read_table('ratings_train.txt')
test_data = pd.read_table('ratings_test.txt')
train_data['document'] = train_data['document'].apply(str)
X = train_data.iloc[:2000,:].document
y = train_data.iloc[:2000,:].label
In [3]:
## Clean up X and y -> turn the training texts into sequence vectors and pad them to max_len / convert the label data to categorical (one-hot) variables
tokenizer = Tokenizer()
tokenizer.fit_on_texts(X)
sequences = tokenizer.texts_to_sequences(X) ### build the vocab, give every word a unique index, then map each word to its index in order
word_to_index = tokenizer.word_index
VOCAB_SIZE = len(word_to_index) + 1 ## add the 0 index needed for padding
MAX_LEN = max(len(seq) for seq in sequences)

def encode_labels(sources):
    classes = [source for source in sources]
    one_hots = to_categorical(classes)
    return one_hots

def create_sequences(texts, max_len=MAX_LEN):
    sequences = tokenizer.texts_to_sequences(texts)
    padded_sequences = pad_sequences(sequences, max_len, padding='post')
    return padded_sequences
X_train, X_valid, y_train, y_valid = train_test_split(create_sequences(X), encode_labels(y), test_size=0.1, random_state=42)
n_classes=2
optimizer = optimizers.Adam(learning_rate=0.01)
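A quick sanity check of the two helpers (a sketch; the sample review text and labels below are illustrative, not from the original post):

print(create_sequences(["재밌어요"]).shape)  # (1, MAX_LEN): every text is padded to MAX_LEN
print(encode_labels([0, 1]))  # [[1., 0.], [0., 1.]]: integer labels become one-hot vectors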
In [4]:
## Build the model: RNN
def build_rnn_model(vocab_size, embed_dim, max_len, units, n_classes):
    input_data = Input(shape=(max_len,), name="input_data", dtype=tf.int32)
    embed = Embedding(vocab_size + 1, embed_dim, input_length=max_len, mask_zero=True, dtype=tf.float32, name='Embedding')(input_data)
    gru = GRU(units, activation=tf.nn.relu, name="GRU")(embed)
    output = Dense(n_classes, activation=tf.nn.softmax, name="output")(gru)
    model = Model(inputs=input_data, outputs=output)
    return model
In [5]:
rnn_model = build_rnn_model(VOCAB_SIZE, 32, MAX_LEN, 32, 2)
In [6]:
rnn_model.compile(
    optimizer='adam',
    loss='categorical_crossentropy',
    metrics=['accuracy']
)
In [7]:
tf.keras.utils.plot_model(rnn_model, show_shapes=True, dpi=90)
Out[7]:
In [8]:
rnn_model.fit(X_train, y_train)
Out[8]:
In [9]:
rnn_model.predict(X_valid)
Out[9]:
SavedModel and serving signature¶
- Now save the model with tf.saved_model.save() in the SavedModel format, not the older Keras H5 format.
- This adds a serving signature, which we can then inspect.
- The serving signature indicates exactly which input names and types are expected, and what the model will output.
In [10]:
MODEL_EXPORT_PATH = './test_rnn_model/'
tf.saved_model.save(rnn_model, MODEL_EXPORT_PATH)
In [11]:
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {MODEL_EXPORT_PATH}
In [12]:
loaded_model = tf.keras.models.load_model(MODEL_EXPORT_PATH)
In [13]:
loaded_model.signatures
Out[13]:
Compiled model vs. saved-and-reloaded model¶
- The original compiled model does not carry a serving signature.
- The model only gets a serving signature once it has been saved, so we save it, load it back, and then pick the serving signature from the loaded model (a quick way to inspect it is sketched below).
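A minimal way to inspect that signature (a sketch; structured_input_signature and structured_outputs come from the ConcreteFunction API and are not shown in the original post):

sig = loaded_model.signatures['serving_default']
print(sig.structured_input_signature)  # expected input names, dtypes, shapes
print(sig.structured_outputs)          # output tensor names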
In [14]:
loaded_model
Out[14]:
In [15]:
rnn_model
Out[15]:
- In the loaded model, an inference function takes over the role of predict; it behaves much like Keras's Model.predict().
- Note that the output tensor name matches the one in the serving signature.
In [16]:
inference_function = loaded_model.signatures['serving_default']
In [17]:
inference_function # similar to keras.Model.predict()
Out[17]:
In [18]:
result = inference_function(tf.convert_to_tensor(X_valid))
print(result)
In [19]:
result['output']
Out[19]:
Keyed Serving Function¶
- Now we'll create a new serving function that accepts and outputs a unique instance key.
- We use the fact that calling a Keras Model(x) directly actually runs a prediction (see the quick check below).
- The training=False argument is included only for clarity. We then save the model as before, but provide this function as our new serving signature.
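A quick check of that fact (a sketch; the slice of X_valid is just an illustrative input):

direct = loaded_model(X_valid[:1], training=False)
print(direct.numpy())  # the same class probabilities that predict() / the inference function return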
In [20]:
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string), tf.TensorSpec([None, MAX_LEN], dtype=tf.int32)])
def keyed_prediction(key, data):
    pred = loaded_model(data, training=False)
    return {
        'output': pred,
        'key': key
    }
In [21]:
KEYED_EXPORT_PATH = './keyed_test_rnn_model/'
loaded_model.save(KEYED_EXPORT_PATH, signatures={'serving_default': keyed_prediction})
In [22]:
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {KEYED_EXPORT_PATH}
In [23]:
keyed_model = tf.keras.models.load_model(KEYED_EXPORT_PATH)
In [24]:
keyed_model.predict(
    {
        'input_1': tf.convert_to_tensor([X_valid[0]], dtype=tf.int32),
        'key': tf.constant("1번유저")
    }
)
Out[24]:
In [30]:
os.environ["MODEL_LOCATION"] = KEYED_EXPORT_PATH
Deploying to GCP ai-platform¶
- On ai-platform, predictions can be run through the serving_signature, and not only a single signature but dual signatures are supported as well (a dual-signature export is sketched below).
- Google Cloud AI Platform online and batch prediction support multiple signatures, as does TF Serving.
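A minimal sketch of such a dual-signature export, reusing keyed_prediction from above (the export path and the extra signature name are illustrative, not from the original post):

@tf.function(input_signature=[tf.TensorSpec([None, MAX_LEN], dtype=tf.int32)])
def nonkeyed_prediction(data):
    # plain forward pass without a key, for clients that don't send one
    return {'output': loaded_model(data, training=False)}

DUAL_EXPORT_PATH = './dual_signature_rnn_model/'  # hypothetical path
loaded_model.save(DUAL_EXPORT_PATH, signatures={
    'serving_default': keyed_prediction,
    'nonkey_prediction': nonkeyed_prediction
})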
In [ ]:
!gcloud ai-platform models create test_keyed_model \
  --regions us-central1
In [ ]:
!gcloud ai-platform versions create v2 \
  --model test_keyed_model --origin ${MODEL_LOCATION} --staging-bucket gs://daehwan \
  --runtime-version 2.1
In [31]:
with open("keyed_txt_input.json", "w") as file:
    print('{"data": [494, 9251, 9252,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0], "key": "id_1234"}', file=file)
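The same instance file can also be built from an actual padded sequence instead of a hard-coded list (a sketch using the standard json module; the key value is illustrative):

import json
instance = {"data": X_valid[0].tolist(), "key": "id_1234"}
with open("keyed_txt_input.json", "w") as file:
    file.write(json.dumps(instance))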
In [32]:
!gcloud ai-platform predict --model test_keyed_model --json-instances keyed_txt_input.json --version v2 --signature-name serving_default
As the result above shows, the model returns not only its prediction but also the corresponding key.¶