How to apply feature transforms with a tf.keras serving function
In [1]:
import pandas as pd
import numpy as np
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
In [2]:
PROJECT_ID = "project_id"
BUCKET_NAME = "bucket_name"
REGION = "us-central1"
In [3]:
!gcloud config set project $PROJECT_ID
!gcloud config set compute/region $REGION
In [4]:
abalone_train = pd.read_csv(
    "https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv",
    names=["Length", "Diameter", "Height", "Whole weight", "Shucked weight",
           "Viscera weight", "Shell weight", "Age"])
abalone_train.head()
abalone_train.head()
Out[4]:
In [5]:
abalone_features = abalone_train.copy()
abalone_labels = abalone_features.pop('Age')
In [6]:
abalone_features = np.array(abalone_features)
abalone_features
Out[6]:
In [7]:
abalone_model = tf.keras.Sequential([
    layers.Dense(64),
    layers.Dense(1)
])
abalone_model.compile(loss=tf.losses.MeanSquaredError(),
                      optimizer=tf.optimizers.Adam())
In [8]:
abalone_model.fit(abalone_features, abalone_labels, epochs=10)
Out[8]:
In [9]:
abalone_test = abalone_features[10:15]
abalone_model.predict(abalone_test)
Out[9]:
In [10]:
column_lst = list(abalone_train).copy()
column_lst.remove('Age')
column_lst
Out[10]:
In [13]:
input_signature = [
    tf.TensorSpec([None], dtype=tf.string, name='user'),
    tf.TensorSpec([None, 7], dtype=tf.float64, name='data')
]
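Here user is a pass-through string key for each row, and the second dimension of the data spec (7) matches the seven feature columns in column_lst above.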
You only need to change the inside of the serving_func function below!
1. Apply feature engineering and normalization to data, then feed the result to loaded_model.
2. Note that data is a tensor, so the feature engineering and normalization must be done with tensor operations (see the sketch after this list).
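As a minimal sketch of point 2, assuming hypothetical FEATURE_MEAN and FEATURE_STD statistics taken from the training features (these names and the standardization step are illustrative, not from the original post), the tensor-only transform could look like:

# Hypothetical per-feature statistics, computed from the training set.
FEATURE_MEAN = tf.constant(abalone_features.mean(axis=0), dtype=tf.float64)
FEATURE_STD = tf.constant(abalone_features.std(axis=0), dtype=tf.float64)

@tf.function(input_signature=input_signature)
def serving_func_scaled(user, data):
    # Tensor ops only, so the transform serializes into the exported graph.
    scaled = (data - FEATURE_MEAN) / FEATURE_STD
    pred = loaded_model(scaled, training=False)
    return {'user': user, 'pred': pred}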
In [14]:
@tf.function(input_signature=input_signature)
def serving_func(user, data):
    """
    Take the raw data here, apply feature engineering and
    normalization, and pass the final tensor to the model.
    (This must be done with tensor operations.)
    """
    # loaded_model is captured lazily, when this function is traced
    # during the model save below.
    pred = loaded_model(data, training=False)
    predictions = {
        'user': user,
        'pred': pred
    }
    return predictions
In [15]:
MODEL_EXPORT_PATH = "./"
tf.saved_model.save(abalone_model, MODEL_EXPORT_PATH)
In [16]:
loaded_model = tf.keras.models.load_model(MODEL_EXPORT_PATH)
In [17]:
SERVING_MODEL_EXPORT_PATH = './serving'
In [18]:
loaded_model(abalone_test)
Out[18]:
In [19]:
loaded_model.save(SERVING_MODEL_EXPORT_PATH, signatures={'serving_default': serving_func})
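To verify that the keyed signature was actually attached to the export, TensorFlow's bundled saved_model_cli can dump its inputs and outputs (it should list user and data):

!saved_model_cli show --dir {SERVING_MODEL_EXPORT_PATH} --tag_set serve --signature_def serving_default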
In [20]:
keyed_model = tf.keras.models.load_model(SERVING_MODEL_EXPORT_PATH)
In [21]:
keyed_model(abalone_test)
Out[21]:
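Note that calling keyed_model directly, as above, goes through the original Keras call and bypasses the serving signature. To exercise the keyed signature itself locally before deploying, a quick sketch is to load the raw SavedModel and invoke the signature by name:

raw_model = tf.saved_model.load(SERVING_MODEL_EXPORT_PATH)
serving_fn = raw_model.signatures['serving_default']
serving_fn(user=tf.constant(['id_1234'] * len(abalone_test)),
           data=tf.constant(abalone_test))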
In [22]:
abalone_test[1]
Out[22]:
In [23]:
import os
os.environ["MODEL_LOCATION"] = SERVING_MODEL_EXPORT_PATH
In [ ]:
## Run only once, when the model resource is first created
!gcloud ai-platform models create test_keyed_model \
    --regions us-central1
In [24]:
!gcloud ai-platform versions create v12 \
    --model test_keyed_model --origin ${MODEL_LOCATION} --staging-bucket gs://daehwan \
    --runtime-version 2.1
In [25]:
with open("keyed_input.json", "w") as file:
    print('{"data": [1,2,3,4,5,6,7], "user": "id_1234"}', file=file)
In [26]:
!gcloud ai-platform predict --model test_keyed_model --json-instances keyed_input.json --version v12 --signature-name serving_default
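Because serving_func returns both fields, each prediction in the response carries the original user key next to its pred value, so results can be matched back to the rows that produced them.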